00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2340 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3601 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.112 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.113 The recommended git tool is: git 00:00:00.113 using credential 00000000-0000-0000-0000-000000000002 00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.194 > git --version # 'git version 2.39.2' 00:00:00.194 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.220 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.220 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.586 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.599 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.610 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.610 > git config core.sparsecheckout # timeout=10 00:00:06.620 > git read-tree -mu HEAD # timeout=10 00:00:06.636 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.654 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.654 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.734 [Pipeline] Start of Pipeline 00:00:06.746 [Pipeline] library 00:00:06.747 Loading library shm_lib@master 00:00:06.747 Library shm_lib@master is cached. Copying from home. 00:00:06.762 [Pipeline] node 00:00:06.780 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.781 [Pipeline] { 00:00:06.788 [Pipeline] catchError 00:00:06.789 [Pipeline] { 00:00:06.797 [Pipeline] wrap 00:00:06.804 [Pipeline] { 00:00:06.810 [Pipeline] stage 00:00:06.811 [Pipeline] { (Prologue) 00:00:07.039 [Pipeline] sh 00:00:07.319 + logger -p user.info -t JENKINS-CI 00:00:07.335 [Pipeline] echo 00:00:07.336 Node: WFP21 00:00:07.341 [Pipeline] sh 00:00:07.633 [Pipeline] setCustomBuildProperty 00:00:07.643 [Pipeline] echo 00:00:07.644 Cleanup processes 00:00:07.648 [Pipeline] sh 00:00:07.930 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.930 345013 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.943 [Pipeline] sh 00:00:08.227 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.227 ++ grep -v 'sudo pgrep' 00:00:08.227 ++ awk '{print $1}' 00:00:08.227 + sudo kill -9 00:00:08.227 + true 00:00:08.240 [Pipeline] cleanWs 00:00:08.249 [WS-CLEANUP] Deleting project workspace... 00:00:08.249 [WS-CLEANUP] Deferred wipeout is used... 
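The cleanup stage above boils down to a pgrep/kill pass over anything still running out of the job's SPDK checkout before the workspace is wiped. A minimal standalone sketch of that step, assuming the same workspace path as this job; the pipeline inlines the commands rather than shipping a script, passes the collected PIDs straight to kill, and ends with a bare "true" so a no-match never fails the stage (the xargs form below is an editorial variation, not the pipeline's exact code):

    WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest
    # List stray processes started from the SPDK checkout, drop the pgrep
    # invocation itself, and force-kill whatever is left; never fail the stage.
    sudo pgrep -af "$WORKSPACE/spdk" \
      | grep -v 'sudo pgrep' \
      | awk '{print $1}' \
      | xargs -r sudo kill -9 || true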
00:00:08.257 [WS-CLEANUP] done 00:00:08.260 [Pipeline] setCustomBuildProperty 00:00:08.270 [Pipeline] sh 00:00:08.548 + sudo git config --global --replace-all safe.directory '*' 00:00:08.647 [Pipeline] httpRequest 00:00:09.303 [Pipeline] echo 00:00:09.305 Sorcerer 10.211.164.101 is alive 00:00:09.314 [Pipeline] retry 00:00:09.316 [Pipeline] { 00:00:09.330 [Pipeline] httpRequest 00:00:09.334 HttpMethod: GET 00:00:09.334 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.335 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.351 Response Code: HTTP/1.1 200 OK 00:00:09.351 Success: Status code 200 is in the accepted range: 200,404 00:00:09.352 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:24.340 [Pipeline] } 00:00:24.356 [Pipeline] // retry 00:00:24.362 [Pipeline] sh 00:00:24.645 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:24.659 [Pipeline] httpRequest 00:00:25.106 [Pipeline] echo 00:00:25.107 Sorcerer 10.211.164.101 is alive 00:00:25.115 [Pipeline] retry 00:00:25.117 [Pipeline] { 00:00:25.129 [Pipeline] httpRequest 00:00:25.133 HttpMethod: GET 00:00:25.134 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:25.134 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:25.155 Response Code: HTTP/1.1 200 OK 00:00:25.156 Success: Status code 200 is in the accepted range: 200,404 00:00:25.156 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:01:16.829 [Pipeline] } 00:01:16.846 [Pipeline] // retry 00:01:16.853 [Pipeline] sh 00:01:17.140 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:01:19.689 [Pipeline] sh 00:01:19.973 + git -C spdk log --oneline -n5 00:01:19.973 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:19.973 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:19.973 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:19.973 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:19.973 9469ea403 nvme/fio_plugin: add trim support 00:01:19.984 [Pipeline] } 00:01:19.999 [Pipeline] // stage 00:01:20.007 [Pipeline] stage 00:01:20.010 [Pipeline] { (Prepare) 00:01:20.025 [Pipeline] writeFile 00:01:20.040 [Pipeline] sh 00:01:20.324 + logger -p user.info -t JENKINS-CI 00:01:20.337 [Pipeline] sh 00:01:20.621 + logger -p user.info -t JENKINS-CI 00:01:20.633 [Pipeline] sh 00:01:20.917 + cat autorun-spdk.conf 00:01:20.917 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.917 SPDK_TEST_NVMF=1 00:01:20.917 SPDK_TEST_NVME_CLI=1 00:01:20.917 SPDK_TEST_NVMF_NICS=mlx5 00:01:20.917 SPDK_RUN_UBSAN=1 00:01:20.917 NET_TYPE=phy 00:01:20.925 RUN_NIGHTLY=1 00:01:20.929 [Pipeline] readFile 00:01:20.953 [Pipeline] withEnv 00:01:20.955 [Pipeline] { 00:01:20.967 [Pipeline] sh 00:01:21.252 + set -ex 00:01:21.252 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:21.252 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:21.252 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.252 ++ SPDK_TEST_NVMF=1 00:01:21.252 ++ SPDK_TEST_NVME_CLI=1 00:01:21.252 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:21.252 ++ SPDK_RUN_UBSAN=1 00:01:21.252 ++ NET_TYPE=phy 00:01:21.252 ++ RUN_NIGHTLY=1 00:01:21.252 + case 
$SPDK_TEST_NVMF_NICS in 00:01:21.252 + DRIVERS=mlx5_ib 00:01:21.252 + [[ -n mlx5_ib ]] 00:01:21.252 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:21.252 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:27.825 rmmod: ERROR: Module irdma is not currently loaded 00:01:27.825 rmmod: ERROR: Module i40iw is not currently loaded 00:01:27.825 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:27.825 + true 00:01:27.825 + for D in $DRIVERS 00:01:27.825 + sudo modprobe mlx5_ib 00:01:27.825 + exit 0 00:01:27.835 [Pipeline] } 00:01:27.850 [Pipeline] // withEnv 00:01:27.855 [Pipeline] } 00:01:27.869 [Pipeline] // stage 00:01:27.879 [Pipeline] catchError 00:01:27.881 [Pipeline] { 00:01:27.894 [Pipeline] timeout 00:01:27.894 Timeout set to expire in 1 hr 0 min 00:01:27.896 [Pipeline] { 00:01:27.909 [Pipeline] stage 00:01:27.911 [Pipeline] { (Tests) 00:01:27.925 [Pipeline] sh 00:01:28.210 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:28.211 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:28.211 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:28.211 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:28.211 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:28.211 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:28.211 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:28.211 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:28.211 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:28.211 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:28.211 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:28.211 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:28.211 + source /etc/os-release 00:01:28.211 ++ NAME='Fedora Linux' 00:01:28.211 ++ VERSION='39 (Cloud Edition)' 00:01:28.211 ++ ID=fedora 00:01:28.211 ++ VERSION_ID=39 00:01:28.211 ++ VERSION_CODENAME= 00:01:28.211 ++ PLATFORM_ID=platform:f39 00:01:28.211 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:28.211 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:28.211 ++ LOGO=fedora-logo-icon 00:01:28.211 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:28.211 ++ HOME_URL=https://fedoraproject.org/ 00:01:28.211 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:28.211 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:28.211 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:28.211 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:28.211 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:28.211 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:28.211 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:28.211 ++ SUPPORT_END=2024-11-12 00:01:28.211 ++ VARIANT='Cloud Edition' 00:01:28.211 ++ VARIANT_ID=cloud 00:01:28.211 + uname -a 00:01:28.211 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:28.211 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:30.749 Hugepages 00:01:30.749 node hugesize free / total 00:01:30.749 node0 1048576kB 0 / 0 00:01:31.008 node0 2048kB 0 / 0 00:01:31.008 node1 1048576kB 0 / 0 00:01:31.008 node1 2048kB 0 / 0 00:01:31.008 00:01:31.008 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:31.008 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:31.009 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:31.009 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:31.009 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:31.009 I/OAT 
0000:00:04.4 8086 2021 0 ioatdma - - 00:01:31.009 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:31.009 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:31.009 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:31.009 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:31.009 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:31.009 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:31.009 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:31.009 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:31.009 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:31.009 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:31.009 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:31.009 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:31.009 + rm -f /tmp/spdk-ld-path 00:01:31.009 + source autorun-spdk.conf 00:01:31.009 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.009 ++ SPDK_TEST_NVMF=1 00:01:31.009 ++ SPDK_TEST_NVME_CLI=1 00:01:31.009 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:31.009 ++ SPDK_RUN_UBSAN=1 00:01:31.009 ++ NET_TYPE=phy 00:01:31.009 ++ RUN_NIGHTLY=1 00:01:31.009 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:31.009 + [[ -n '' ]] 00:01:31.009 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:31.009 + for M in /var/spdk/build-*-manifest.txt 00:01:31.009 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:31.009 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:31.009 + for M in /var/spdk/build-*-manifest.txt 00:01:31.009 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:31.009 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:31.268 + for M in /var/spdk/build-*-manifest.txt 00:01:31.268 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:31.268 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:31.268 ++ uname 00:01:31.268 + [[ Linux == \L\i\n\u\x ]] 00:01:31.268 + sudo dmesg -T 00:01:31.268 + sudo dmesg --clear 00:01:31.268 + dmesg_pid=346476 00:01:31.268 + [[ Fedora Linux == FreeBSD ]] 00:01:31.268 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.268 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.268 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:31.268 + [[ -x /usr/src/fio-static/fio ]] 00:01:31.268 + export FIO_BIN=/usr/src/fio-static/fio 00:01:31.268 + FIO_BIN=/usr/src/fio-static/fio 00:01:31.268 + sudo dmesg -Tw 00:01:31.268 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:31.268 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:31.268 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:31.268 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.268 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.268 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:31.268 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.268 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.268 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:31.268 Test configuration: 00:01:31.268 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.268 SPDK_TEST_NVMF=1 00:01:31.268 SPDK_TEST_NVME_CLI=1 00:01:31.268 SPDK_TEST_NVMF_NICS=mlx5 00:01:31.268 SPDK_RUN_UBSAN=1 00:01:31.268 NET_TYPE=phy 00:01:31.268 RUN_NIGHTLY=1 23:00:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:31.268 23:00:36 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.268 23:00:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.268 23:00:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.268 23:00:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.268 23:00:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.268 23:00:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.268 23:00:36 -- paths/export.sh@5 -- $ export PATH 00:01:31.268 23:00:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.268 23:00:36 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:31.268 23:00:36 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:31.268 23:00:36 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1730584836.XXXXXX 00:01:31.268 23:00:36 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1730584836.g5Vq5U 00:01:31.269 23:00:36 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:31.269 23:00:36 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:31.269 23:00:36 -- common/autobuild_common.sh@449 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:31.269 23:00:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:31.269 23:00:36 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.269 23:00:36 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:31.269 23:00:36 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:31.269 23:00:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.269 23:00:36 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:31.269 23:00:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:31.269 23:00:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:31.269 23:00:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:31.269 23:00:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:31.269 Sat Nov 2 10:00:36 PM UTC 2024 00:01:31.269 23:00:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:31.269 LTS-66-g726a04d70 00:01:31.269 23:00:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:31.269 23:00:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:31.269 23:00:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:31.269 23:00:36 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:31.269 23:00:36 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:31.269 23:00:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.269 ************************************ 00:01:31.269 START TEST ubsan 00:01:31.269 ************************************ 00:01:31.269 23:00:36 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:31.269 using ubsan 00:01:31.269 00:01:31.269 real 0m0.000s 00:01:31.269 user 0m0.000s 00:01:31.269 sys 0m0.000s 00:01:31.269 23:00:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:31.269 23:00:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.269 ************************************ 00:01:31.269 END TEST ubsan 00:01:31.269 ************************************ 00:01:31.269 23:00:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:31.269 23:00:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:31.269 23:00:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:31.269 23:00:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:31.269 23:00:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:31.269 23:00:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:31.269 23:00:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:31.269 23:00:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:31.269 23:00:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:31.528 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:31.528 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:31.788 Using 'verbs' RDMA provider 
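At this point autorun.sh has sourced autorun-spdk.conf and handed the derived option string to configure; SPDK_RUN_UBSAN=1 is what pulls in --enable-ubsan, and the autobuild wrapper appends --with-shared for a shared-library build. A rough local equivalent of this configure step, assuming an SPDK checkout in the current directory and fio sources under /usr/src/fio as on this host (flags copied from the invocation above):

    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-shared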
00:01:47.245 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:57.229 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:57.798 Creating mk/config.mk...done. 00:01:57.798 Creating mk/cc.flags.mk...done. 00:01:57.798 Type 'make' to build. 00:01:57.798 23:01:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:57.798 23:01:03 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:57.798 23:01:03 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:57.798 23:01:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.798 ************************************ 00:01:57.798 START TEST make 00:01:57.798 ************************************ 00:01:57.798 23:01:03 -- common/autotest_common.sh@1104 -- $ make -j112 00:01:58.057 make[1]: Nothing to be done for 'all'. 00:02:06.176 The Meson build system 00:02:06.176 Version: 1.5.0 00:02:06.176 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:06.176 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:06.177 Build type: native build 00:02:06.177 Program cat found: YES (/usr/bin/cat) 00:02:06.177 Project name: DPDK 00:02:06.177 Project version: 23.11.0 00:02:06.177 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:06.177 C linker for the host machine: cc ld.bfd 2.40-14 00:02:06.177 Host machine cpu family: x86_64 00:02:06.177 Host machine cpu: x86_64 00:02:06.177 Message: ## Building in Developer Mode ## 00:02:06.177 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.177 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:06.177 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.177 Program python3 found: YES (/usr/bin/python3) 00:02:06.177 Program cat found: YES (/usr/bin/cat) 00:02:06.177 Compiler for C supports arguments -march=native: YES 00:02:06.177 Checking for size of "void *" : 8 00:02:06.177 Checking for size of "void *" : 8 (cached) 00:02:06.177 Library m found: YES 00:02:06.177 Library numa found: YES 00:02:06.177 Has header "numaif.h" : YES 00:02:06.177 Library fdt found: NO 00:02:06.177 Library execinfo found: NO 00:02:06.177 Has header "execinfo.h" : YES 00:02:06.177 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:06.177 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.177 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.177 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.177 Run-time dependency openssl found: YES 3.1.1 00:02:06.177 Run-time dependency libpcap found: YES 1.10.4 00:02:06.177 Has header "pcap.h" with dependency libpcap: YES 00:02:06.177 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.177 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.177 Compiler for C supports arguments -Wformat: YES 00:02:06.177 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.177 Compiler for C supports arguments -Wformat-security: NO 00:02:06.177 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.177 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.177 Compiler for C supports arguments -Wnested-externs: YES 00:02:06.177 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.177 
Compiler for C supports arguments -Wpointer-arith: YES 00:02:06.177 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.177 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.177 Compiler for C supports arguments -Wundef: YES 00:02:06.177 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.177 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:06.177 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:06.177 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.177 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.177 Program objdump found: YES (/usr/bin/objdump) 00:02:06.177 Compiler for C supports arguments -mavx512f: YES 00:02:06.177 Checking if "AVX512 checking" compiles: YES 00:02:06.177 Fetching value of define "__SSE4_2__" : 1 00:02:06.177 Fetching value of define "__AES__" : 1 00:02:06.177 Fetching value of define "__AVX__" : 1 00:02:06.177 Fetching value of define "__AVX2__" : 1 00:02:06.177 Fetching value of define "__AVX512BW__" : 1 00:02:06.177 Fetching value of define "__AVX512CD__" : 1 00:02:06.177 Fetching value of define "__AVX512DQ__" : 1 00:02:06.177 Fetching value of define "__AVX512F__" : 1 00:02:06.177 Fetching value of define "__AVX512VL__" : 1 00:02:06.177 Fetching value of define "__PCLMUL__" : 1 00:02:06.177 Fetching value of define "__RDRND__" : 1 00:02:06.177 Fetching value of define "__RDSEED__" : 1 00:02:06.177 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:06.177 Fetching value of define "__znver1__" : (undefined) 00:02:06.177 Fetching value of define "__znver2__" : (undefined) 00:02:06.177 Fetching value of define "__znver3__" : (undefined) 00:02:06.177 Fetching value of define "__znver4__" : (undefined) 00:02:06.177 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.177 Message: lib/log: Defining dependency "log" 00:02:06.177 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.177 Message: lib/telemetry: Defining dependency "telemetry" 00:02:06.177 Checking for function "getentropy" : NO 00:02:06.177 Message: lib/eal: Defining dependency "eal" 00:02:06.177 Message: lib/ring: Defining dependency "ring" 00:02:06.177 Message: lib/rcu: Defining dependency "rcu" 00:02:06.177 Message: lib/mempool: Defining dependency "mempool" 00:02:06.177 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.177 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:06.177 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.177 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.177 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.177 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:06.177 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:06.177 Compiler for C supports arguments -mpclmul: YES 00:02:06.177 Compiler for C supports arguments -maes: YES 00:02:06.177 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.177 Compiler for C supports arguments -mavx512bw: YES 00:02:06.177 Compiler for C supports arguments -mavx512dq: YES 00:02:06.177 Compiler for C supports arguments -mavx512vl: YES 00:02:06.177 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:06.177 Compiler for C supports arguments -mavx2: YES 00:02:06.177 Compiler for C supports arguments -mavx: YES 00:02:06.177 Message: lib/net: Defining dependency "net" 00:02:06.177 Message: lib/meter: Defining dependency "meter" 00:02:06.177 Message: lib/ethdev: Defining 
dependency "ethdev" 00:02:06.177 Message: lib/pci: Defining dependency "pci" 00:02:06.177 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.177 Message: lib/hash: Defining dependency "hash" 00:02:06.177 Message: lib/timer: Defining dependency "timer" 00:02:06.177 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.177 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.177 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.177 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.177 Message: lib/power: Defining dependency "power" 00:02:06.177 Message: lib/reorder: Defining dependency "reorder" 00:02:06.177 Message: lib/security: Defining dependency "security" 00:02:06.177 Has header "linux/userfaultfd.h" : YES 00:02:06.177 Has header "linux/vduse.h" : YES 00:02:06.177 Message: lib/vhost: Defining dependency "vhost" 00:02:06.177 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:06.177 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:06.177 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:06.177 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:06.177 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:06.177 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:06.177 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:06.177 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:06.177 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:06.177 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:06.177 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:06.177 Configuring doxy-api-html.conf using configuration 00:02:06.177 Configuring doxy-api-man.conf using configuration 00:02:06.177 Program mandb found: YES (/usr/bin/mandb) 00:02:06.177 Program sphinx-build found: NO 00:02:06.177 Configuring rte_build_config.h using configuration 00:02:06.177 Message: 00:02:06.177 ================= 00:02:06.177 Applications Enabled 00:02:06.177 ================= 00:02:06.177 00:02:06.177 apps: 00:02:06.177 00:02:06.177 00:02:06.177 Message: 00:02:06.177 ================= 00:02:06.177 Libraries Enabled 00:02:06.177 ================= 00:02:06.177 00:02:06.177 libs: 00:02:06.177 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:06.177 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:06.177 cryptodev, dmadev, power, reorder, security, vhost, 00:02:06.177 00:02:06.177 Message: 00:02:06.177 =============== 00:02:06.177 Drivers Enabled 00:02:06.177 =============== 00:02:06.177 00:02:06.177 common: 00:02:06.177 00:02:06.177 bus: 00:02:06.177 pci, vdev, 00:02:06.177 mempool: 00:02:06.177 ring, 00:02:06.177 dma: 00:02:06.177 00:02:06.177 net: 00:02:06.177 00:02:06.177 crypto: 00:02:06.177 00:02:06.177 compress: 00:02:06.177 00:02:06.177 vdpa: 00:02:06.177 00:02:06.177 00:02:06.177 Message: 00:02:06.177 ================= 00:02:06.177 Content Skipped 00:02:06.177 ================= 00:02:06.177 00:02:06.177 apps: 00:02:06.177 dumpcap: explicitly disabled via build config 00:02:06.177 graph: explicitly disabled via build config 00:02:06.177 pdump: explicitly disabled via build config 00:02:06.177 proc-info: explicitly disabled via build config 00:02:06.177 test-acl: explicitly disabled via build config 00:02:06.177 test-bbdev: explicitly disabled via build config 00:02:06.177 test-cmdline: explicitly 
disabled via build config 00:02:06.177 test-compress-perf: explicitly disabled via build config 00:02:06.177 test-crypto-perf: explicitly disabled via build config 00:02:06.177 test-dma-perf: explicitly disabled via build config 00:02:06.177 test-eventdev: explicitly disabled via build config 00:02:06.177 test-fib: explicitly disabled via build config 00:02:06.177 test-flow-perf: explicitly disabled via build config 00:02:06.177 test-gpudev: explicitly disabled via build config 00:02:06.177 test-mldev: explicitly disabled via build config 00:02:06.178 test-pipeline: explicitly disabled via build config 00:02:06.178 test-pmd: explicitly disabled via build config 00:02:06.178 test-regex: explicitly disabled via build config 00:02:06.178 test-sad: explicitly disabled via build config 00:02:06.178 test-security-perf: explicitly disabled via build config 00:02:06.178 00:02:06.178 libs: 00:02:06.178 metrics: explicitly disabled via build config 00:02:06.178 acl: explicitly disabled via build config 00:02:06.178 bbdev: explicitly disabled via build config 00:02:06.178 bitratestats: explicitly disabled via build config 00:02:06.178 bpf: explicitly disabled via build config 00:02:06.178 cfgfile: explicitly disabled via build config 00:02:06.178 distributor: explicitly disabled via build config 00:02:06.178 efd: explicitly disabled via build config 00:02:06.178 eventdev: explicitly disabled via build config 00:02:06.178 dispatcher: explicitly disabled via build config 00:02:06.178 gpudev: explicitly disabled via build config 00:02:06.178 gro: explicitly disabled via build config 00:02:06.178 gso: explicitly disabled via build config 00:02:06.178 ip_frag: explicitly disabled via build config 00:02:06.178 jobstats: explicitly disabled via build config 00:02:06.178 latencystats: explicitly disabled via build config 00:02:06.178 lpm: explicitly disabled via build config 00:02:06.178 member: explicitly disabled via build config 00:02:06.178 pcapng: explicitly disabled via build config 00:02:06.178 rawdev: explicitly disabled via build config 00:02:06.178 regexdev: explicitly disabled via build config 00:02:06.178 mldev: explicitly disabled via build config 00:02:06.178 rib: explicitly disabled via build config 00:02:06.178 sched: explicitly disabled via build config 00:02:06.178 stack: explicitly disabled via build config 00:02:06.178 ipsec: explicitly disabled via build config 00:02:06.178 pdcp: explicitly disabled via build config 00:02:06.178 fib: explicitly disabled via build config 00:02:06.178 port: explicitly disabled via build config 00:02:06.178 pdump: explicitly disabled via build config 00:02:06.178 table: explicitly disabled via build config 00:02:06.178 pipeline: explicitly disabled via build config 00:02:06.178 graph: explicitly disabled via build config 00:02:06.178 node: explicitly disabled via build config 00:02:06.178 00:02:06.178 drivers: 00:02:06.178 common/cpt: not in enabled drivers build config 00:02:06.178 common/dpaax: not in enabled drivers build config 00:02:06.178 common/iavf: not in enabled drivers build config 00:02:06.178 common/idpf: not in enabled drivers build config 00:02:06.178 common/mvep: not in enabled drivers build config 00:02:06.178 common/octeontx: not in enabled drivers build config 00:02:06.178 bus/auxiliary: not in enabled drivers build config 00:02:06.178 bus/cdx: not in enabled drivers build config 00:02:06.178 bus/dpaa: not in enabled drivers build config 00:02:06.178 bus/fslmc: not in enabled drivers build config 00:02:06.178 bus/ifpga: not in enabled 
drivers build config 00:02:06.178 bus/platform: not in enabled drivers build config 00:02:06.178 bus/vmbus: not in enabled drivers build config 00:02:06.178 common/cnxk: not in enabled drivers build config 00:02:06.178 common/mlx5: not in enabled drivers build config 00:02:06.178 common/nfp: not in enabled drivers build config 00:02:06.178 common/qat: not in enabled drivers build config 00:02:06.178 common/sfc_efx: not in enabled drivers build config 00:02:06.178 mempool/bucket: not in enabled drivers build config 00:02:06.178 mempool/cnxk: not in enabled drivers build config 00:02:06.178 mempool/dpaa: not in enabled drivers build config 00:02:06.178 mempool/dpaa2: not in enabled drivers build config 00:02:06.178 mempool/octeontx: not in enabled drivers build config 00:02:06.178 mempool/stack: not in enabled drivers build config 00:02:06.178 dma/cnxk: not in enabled drivers build config 00:02:06.178 dma/dpaa: not in enabled drivers build config 00:02:06.178 dma/dpaa2: not in enabled drivers build config 00:02:06.178 dma/hisilicon: not in enabled drivers build config 00:02:06.178 dma/idxd: not in enabled drivers build config 00:02:06.178 dma/ioat: not in enabled drivers build config 00:02:06.178 dma/skeleton: not in enabled drivers build config 00:02:06.178 net/af_packet: not in enabled drivers build config 00:02:06.178 net/af_xdp: not in enabled drivers build config 00:02:06.178 net/ark: not in enabled drivers build config 00:02:06.178 net/atlantic: not in enabled drivers build config 00:02:06.178 net/avp: not in enabled drivers build config 00:02:06.178 net/axgbe: not in enabled drivers build config 00:02:06.178 net/bnx2x: not in enabled drivers build config 00:02:06.178 net/bnxt: not in enabled drivers build config 00:02:06.178 net/bonding: not in enabled drivers build config 00:02:06.178 net/cnxk: not in enabled drivers build config 00:02:06.178 net/cpfl: not in enabled drivers build config 00:02:06.178 net/cxgbe: not in enabled drivers build config 00:02:06.178 net/dpaa: not in enabled drivers build config 00:02:06.178 net/dpaa2: not in enabled drivers build config 00:02:06.178 net/e1000: not in enabled drivers build config 00:02:06.178 net/ena: not in enabled drivers build config 00:02:06.178 net/enetc: not in enabled drivers build config 00:02:06.178 net/enetfec: not in enabled drivers build config 00:02:06.178 net/enic: not in enabled drivers build config 00:02:06.178 net/failsafe: not in enabled drivers build config 00:02:06.178 net/fm10k: not in enabled drivers build config 00:02:06.178 net/gve: not in enabled drivers build config 00:02:06.178 net/hinic: not in enabled drivers build config 00:02:06.178 net/hns3: not in enabled drivers build config 00:02:06.178 net/i40e: not in enabled drivers build config 00:02:06.178 net/iavf: not in enabled drivers build config 00:02:06.178 net/ice: not in enabled drivers build config 00:02:06.178 net/idpf: not in enabled drivers build config 00:02:06.178 net/igc: not in enabled drivers build config 00:02:06.178 net/ionic: not in enabled drivers build config 00:02:06.178 net/ipn3ke: not in enabled drivers build config 00:02:06.178 net/ixgbe: not in enabled drivers build config 00:02:06.178 net/mana: not in enabled drivers build config 00:02:06.178 net/memif: not in enabled drivers build config 00:02:06.178 net/mlx4: not in enabled drivers build config 00:02:06.178 net/mlx5: not in enabled drivers build config 00:02:06.178 net/mvneta: not in enabled drivers build config 00:02:06.178 net/mvpp2: not in enabled drivers build config 00:02:06.178 
net/netvsc: not in enabled drivers build config 00:02:06.178 net/nfb: not in enabled drivers build config 00:02:06.178 net/nfp: not in enabled drivers build config 00:02:06.178 net/ngbe: not in enabled drivers build config 00:02:06.178 net/null: not in enabled drivers build config 00:02:06.178 net/octeontx: not in enabled drivers build config 00:02:06.178 net/octeon_ep: not in enabled drivers build config 00:02:06.178 net/pcap: not in enabled drivers build config 00:02:06.178 net/pfe: not in enabled drivers build config 00:02:06.178 net/qede: not in enabled drivers build config 00:02:06.178 net/ring: not in enabled drivers build config 00:02:06.178 net/sfc: not in enabled drivers build config 00:02:06.178 net/softnic: not in enabled drivers build config 00:02:06.178 net/tap: not in enabled drivers build config 00:02:06.178 net/thunderx: not in enabled drivers build config 00:02:06.178 net/txgbe: not in enabled drivers build config 00:02:06.178 net/vdev_netvsc: not in enabled drivers build config 00:02:06.178 net/vhost: not in enabled drivers build config 00:02:06.178 net/virtio: not in enabled drivers build config 00:02:06.178 net/vmxnet3: not in enabled drivers build config 00:02:06.178 raw/*: missing internal dependency, "rawdev" 00:02:06.178 crypto/armv8: not in enabled drivers build config 00:02:06.178 crypto/bcmfs: not in enabled drivers build config 00:02:06.178 crypto/caam_jr: not in enabled drivers build config 00:02:06.178 crypto/ccp: not in enabled drivers build config 00:02:06.178 crypto/cnxk: not in enabled drivers build config 00:02:06.178 crypto/dpaa_sec: not in enabled drivers build config 00:02:06.178 crypto/dpaa2_sec: not in enabled drivers build config 00:02:06.178 crypto/ipsec_mb: not in enabled drivers build config 00:02:06.178 crypto/mlx5: not in enabled drivers build config 00:02:06.178 crypto/mvsam: not in enabled drivers build config 00:02:06.178 crypto/nitrox: not in enabled drivers build config 00:02:06.178 crypto/null: not in enabled drivers build config 00:02:06.178 crypto/octeontx: not in enabled drivers build config 00:02:06.178 crypto/openssl: not in enabled drivers build config 00:02:06.178 crypto/scheduler: not in enabled drivers build config 00:02:06.178 crypto/uadk: not in enabled drivers build config 00:02:06.178 crypto/virtio: not in enabled drivers build config 00:02:06.178 compress/isal: not in enabled drivers build config 00:02:06.178 compress/mlx5: not in enabled drivers build config 00:02:06.178 compress/octeontx: not in enabled drivers build config 00:02:06.178 compress/zlib: not in enabled drivers build config 00:02:06.178 regex/*: missing internal dependency, "regexdev" 00:02:06.178 ml/*: missing internal dependency, "mldev" 00:02:06.178 vdpa/ifc: not in enabled drivers build config 00:02:06.178 vdpa/mlx5: not in enabled drivers build config 00:02:06.178 vdpa/nfp: not in enabled drivers build config 00:02:06.178 vdpa/sfc: not in enabled drivers build config 00:02:06.178 event/*: missing internal dependency, "eventdev" 00:02:06.178 baseband/*: missing internal dependency, "bbdev" 00:02:06.178 gpu/*: missing internal dependency, "gpudev" 00:02:06.178 00:02:06.178 00:02:06.178 Build targets in project: 85 00:02:06.178 00:02:06.178 DPDK 23.11.0 00:02:06.178 00:02:06.178 User defined options 00:02:06.178 buildtype : debug 00:02:06.178 default_library : shared 00:02:06.178 libdir : lib 00:02:06.178 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:06.178 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 
-Wno-stringop-overread -Wno-array-bounds 00:02:06.178 c_link_args : 00:02:06.178 cpu_instruction_set: native 00:02:06.179 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:06.179 disable_libs : bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:06.179 enable_docs : false 00:02:06.179 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:06.179 enable_kmods : false 00:02:06.179 tests : false 00:02:06.179 00:02:06.179 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.449 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:06.449 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.449 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:06.449 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:06.718 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.718 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.718 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:06.718 [7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:06.718 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:06.718 [9/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.718 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:06.718 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.718 [12/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:06.718 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.718 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.718 [15/265] Linking static target lib/librte_kvargs.a 00:02:06.718 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:06.718 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:06.718 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.718 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:06.718 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:06.718 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.718 [22/265] Linking static target lib/librte_log.a 00:02:06.718 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.718 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.718 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.718 [26/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.718 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.718 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.718 [29/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:06.718 [30/265] Linking static target lib/librte_pci.a 
00:02:06.718 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.718 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.718 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.718 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.718 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.718 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.977 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.977 [38/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:06.977 [39/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.977 [40/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.977 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:06.977 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:06.977 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.977 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:06.977 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:06.977 [46/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:06.977 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:06.977 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.237 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.237 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.237 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:07.237 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:07.237 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:07.237 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:07.237 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:07.237 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.237 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:07.237 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.237 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.237 [60/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:07.237 [61/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:07.237 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:07.237 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:07.237 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:07.237 [65/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.237 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.237 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:07.237 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:07.237 [69/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.237 [70/265] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:07.237 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:07.237 [72/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.237 [73/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:07.237 [74/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.237 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.237 [76/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.237 [77/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.237 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.237 [79/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.237 [80/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.237 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:07.237 [82/265] Linking static target lib/librte_ring.a 00:02:07.237 [83/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.237 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.237 [85/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.237 [86/265] Linking static target lib/librte_meter.a 00:02:07.237 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:07.238 [88/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.238 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.238 [90/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.238 [91/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.238 [92/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:07.238 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.238 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.238 [95/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.238 [96/265] Linking static target lib/librte_telemetry.a 00:02:07.238 [97/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:07.238 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:07.238 [99/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.238 [100/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:07.238 [101/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:07.238 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.238 [103/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.238 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:07.238 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.238 [106/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:07.238 [107/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.238 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:07.238 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:07.238 [110/265] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.238 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:07.238 [112/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.238 [113/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:07.238 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.238 [115/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.238 [116/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.238 [117/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.238 [118/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.238 [119/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:07.238 [120/265] Linking static target lib/librte_cmdline.a 00:02:07.238 [121/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.238 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:07.238 [123/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.238 [124/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.238 [125/265] Linking static target lib/librte_timer.a 00:02:07.238 [126/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.238 [127/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:07.238 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.238 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.238 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.238 [131/265] Linking static target lib/librte_net.a 00:02:07.238 [132/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:07.238 [133/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:07.238 [134/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.238 [135/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:07.238 [136/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.238 [137/265] Linking static target lib/librte_compressdev.a 00:02:07.238 [138/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:07.238 [139/265] Linking static target lib/librte_mempool.a 00:02:07.238 [140/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.238 [141/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.238 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:07.238 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:07.238 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.238 [145/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.238 [146/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:07.238 [147/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.238 [148/265] Linking static target lib/librte_dmadev.a 00:02:07.238 [149/265] Linking static target lib/librte_rcu.a 00:02:07.498 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.498 [151/265] Linking static target lib/librte_eal.a 00:02:07.498 [152/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:07.498 [153/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:07.498 [154/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:07.498 [155/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:07.498 [156/265] Linking static target lib/librte_reorder.a 00:02:07.498 [157/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.498 [158/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:07.498 [159/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.498 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.498 [161/265] Linking static target lib/librte_mbuf.a 00:02:07.498 [162/265] Linking target lib/librte_log.so.24.0 00:02:07.498 [163/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.498 [164/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:07.498 [165/265] Linking static target lib/librte_power.a 00:02:07.498 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:07.498 [167/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:07.498 [168/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:07.498 [169/265] Linking static target lib/librte_security.a 00:02:07.498 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:07.498 [171/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:07.498 [172/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:07.498 [173/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.498 [174/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:07.498 [175/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.498 [176/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.498 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:07.498 [178/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:07.498 [179/265] Linking static target lib/librte_hash.a 00:02:07.498 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:07.498 [181/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:07.498 [182/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:07.498 [183/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:07.498 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:07.757 [185/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:07.757 [186/265] Linking target lib/librte_kvargs.so.24.0 00:02:07.757 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:07.757 [188/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.757 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:07.757 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:07.757 [191/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:07.757 [192/265] Linking static target lib/librte_cryptodev.a 00:02:07.757 [193/265] Generating drivers/rte_bus_vdev.pmd.c with a custom 
command 00:02:07.757 [194/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.757 [195/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.757 [196/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.757 [197/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:07.757 [198/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.757 [199/265] Linking static target drivers/librte_bus_vdev.a 00:02:07.757 [200/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:07.757 [201/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.757 [202/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.757 [203/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:07.757 [204/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.757 [205/265] Linking target lib/librte_telemetry.so.24.0 00:02:07.757 [206/265] Linking static target drivers/librte_mempool_ring.a 00:02:07.757 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.757 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.757 [209/265] Linking static target drivers/librte_bus_pci.a 00:02:07.757 [210/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.015 [211/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.016 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:08.016 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.016 [214/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.275 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.275 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.275 [217/265] Linking static target lib/librte_ethdev.a 00:02:08.275 [218/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.275 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.275 [220/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.534 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.534 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.534 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.792 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.362 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:09.362 [226/265] Linking static target lib/librte_vhost.a 00:02:09.931 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.949 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:17.239 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.530 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.530 [231/265] Linking target lib/librte_eal.so.24.0 00:02:20.530 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:20.530 [233/265] Linking target lib/librte_pci.so.24.0 00:02:20.530 [234/265] Linking target lib/librte_dmadev.so.24.0 00:02:20.530 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:20.530 [236/265] Linking target lib/librte_ring.so.24.0 00:02:20.530 [237/265] Linking target lib/librte_meter.so.24.0 00:02:20.530 [238/265] Linking target lib/librte_timer.so.24.0 00:02:20.530 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:20.530 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:20.530 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:20.530 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:20.530 [243/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:20.530 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:20.530 [245/265] Linking target lib/librte_rcu.so.24.0 00:02:20.530 [246/265] Linking target lib/librte_mempool.so.24.0 00:02:20.530 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:20.530 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:20.789 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:20.789 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:20.789 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:20.789 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:20.789 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:20.789 [254/265] Linking target lib/librte_net.so.24.0 00:02:20.789 [255/265] Linking target lib/librte_compressdev.so.24.0 00:02:21.048 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:21.048 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:21.048 [258/265] Linking target lib/librte_hash.so.24.0 00:02:21.048 [259/265] Linking target lib/librte_security.so.24.0 00:02:21.048 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:21.048 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:21.048 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:21.048 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:21.308 [264/265] Linking target lib/librte_power.so.24.0 00:02:21.308 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:21.308 INFO: autodetecting backend as ninja 00:02:21.308 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:22.245 CC lib/ut/ut.o 00:02:22.245 CC lib/log/log_deprecated.o 00:02:22.245 CC lib/log/log.o 00:02:22.245 CC lib/ut_mock/mock.o 00:02:22.246 CC lib/log/log_flags.o 00:02:22.246 LIB libspdk_ut.a 00:02:22.246 LIB libspdk_ut_mock.a 00:02:22.246 SO libspdk_ut.so.1.0 00:02:22.246 LIB libspdk_log.a 00:02:22.246 SO libspdk_ut_mock.so.5.0 00:02:22.246 SYMLINK libspdk_ut.so 
00:02:22.246 SO libspdk_log.so.6.1 00:02:22.505 SYMLINK libspdk_ut_mock.so 00:02:22.505 SYMLINK libspdk_log.so 00:02:22.764 CXX lib/trace_parser/trace.o 00:02:22.764 CC lib/ioat/ioat.o 00:02:22.764 CC lib/util/base64.o 00:02:22.764 CC lib/util/bit_array.o 00:02:22.764 CC lib/util/cpuset.o 00:02:22.764 CC lib/util/crc16.o 00:02:22.764 CC lib/util/crc32.o 00:02:22.764 CC lib/util/crc32c.o 00:02:22.764 CC lib/util/crc32_ieee.o 00:02:22.764 CC lib/util/crc64.o 00:02:22.764 CC lib/util/file.o 00:02:22.764 CC lib/util/dif.o 00:02:22.764 CC lib/dma/dma.o 00:02:22.764 CC lib/util/fd.o 00:02:22.764 CC lib/util/hexlify.o 00:02:22.764 CC lib/util/iov.o 00:02:22.764 CC lib/util/math.o 00:02:22.764 CC lib/util/pipe.o 00:02:22.764 CC lib/util/strerror_tls.o 00:02:22.764 CC lib/util/string.o 00:02:22.764 CC lib/util/uuid.o 00:02:22.764 CC lib/util/fd_group.o 00:02:22.764 CC lib/util/xor.o 00:02:22.764 CC lib/util/zipf.o 00:02:22.764 CC lib/vfio_user/host/vfio_user_pci.o 00:02:22.764 CC lib/vfio_user/host/vfio_user.o 00:02:22.764 LIB libspdk_dma.a 00:02:22.764 LIB libspdk_ioat.a 00:02:22.764 SO libspdk_dma.so.3.0 00:02:23.023 SO libspdk_ioat.so.6.0 00:02:23.023 SYMLINK libspdk_dma.so 00:02:23.023 SYMLINK libspdk_ioat.so 00:02:23.023 LIB libspdk_vfio_user.a 00:02:23.023 SO libspdk_vfio_user.so.4.0 00:02:23.023 LIB libspdk_util.a 00:02:23.023 SYMLINK libspdk_vfio_user.so 00:02:23.282 SO libspdk_util.so.8.0 00:02:23.282 SYMLINK libspdk_util.so 00:02:23.282 LIB libspdk_trace_parser.a 00:02:23.282 SO libspdk_trace_parser.so.4.0 00:02:23.540 SYMLINK libspdk_trace_parser.so 00:02:23.540 CC lib/env_dpdk/env.o 00:02:23.540 CC lib/env_dpdk/memory.o 00:02:23.540 CC lib/env_dpdk/pci.o 00:02:23.540 CC lib/env_dpdk/init.o 00:02:23.540 CC lib/env_dpdk/threads.o 00:02:23.540 CC lib/env_dpdk/pci_ioat.o 00:02:23.540 CC lib/env_dpdk/pci_virtio.o 00:02:23.540 CC lib/env_dpdk/pci_vmd.o 00:02:23.540 CC lib/env_dpdk/sigbus_handler.o 00:02:23.540 CC lib/env_dpdk/pci_idxd.o 00:02:23.540 CC lib/env_dpdk/pci_event.o 00:02:23.540 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.540 CC lib/env_dpdk/pci_dpdk.o 00:02:23.540 CC lib/rdma/common.o 00:02:23.540 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.540 CC lib/rdma/rdma_verbs.o 00:02:23.540 CC lib/vmd/vmd.o 00:02:23.540 CC lib/vmd/led.o 00:02:23.540 CC lib/conf/conf.o 00:02:23.540 CC lib/idxd/idxd.o 00:02:23.540 CC lib/json/json_parse.o 00:02:23.540 CC lib/json/json_util.o 00:02:23.540 CC lib/json/json_write.o 00:02:23.540 CC lib/idxd/idxd_user.o 00:02:23.540 CC lib/idxd/idxd_kernel.o 00:02:23.798 LIB libspdk_conf.a 00:02:23.798 LIB libspdk_rdma.a 00:02:23.798 LIB libspdk_json.a 00:02:23.798 SO libspdk_conf.so.5.0 00:02:23.798 SO libspdk_rdma.so.5.0 00:02:23.798 SO libspdk_json.so.5.1 00:02:23.798 SYMLINK libspdk_rdma.so 00:02:23.798 SYMLINK libspdk_conf.so 00:02:23.798 SYMLINK libspdk_json.so 00:02:24.056 LIB libspdk_idxd.a 00:02:24.056 SO libspdk_idxd.so.11.0 00:02:24.056 LIB libspdk_vmd.a 00:02:24.056 SO libspdk_vmd.so.5.0 00:02:24.056 SYMLINK libspdk_idxd.so 00:02:24.056 SYMLINK libspdk_vmd.so 00:02:24.056 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.056 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:24.056 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:24.056 CC lib/jsonrpc/jsonrpc_client.o 00:02:24.318 LIB libspdk_jsonrpc.a 00:02:24.318 SO libspdk_jsonrpc.so.5.1 00:02:24.318 SYMLINK libspdk_jsonrpc.so 00:02:24.576 LIB libspdk_env_dpdk.a 00:02:24.577 SO libspdk_env_dpdk.so.13.0 00:02:24.577 CC lib/rpc/rpc.o 00:02:24.577 SYMLINK libspdk_env_dpdk.so 00:02:24.835 LIB libspdk_rpc.a 00:02:24.835 SO 
libspdk_rpc.so.5.0 00:02:24.835 SYMLINK libspdk_rpc.so 00:02:25.095 CC lib/sock/sock.o 00:02:25.095 CC lib/sock/sock_rpc.o 00:02:25.095 CC lib/trace/trace_flags.o 00:02:25.095 CC lib/trace/trace.o 00:02:25.095 CC lib/trace/trace_rpc.o 00:02:25.095 CC lib/notify/notify.o 00:02:25.095 CC lib/notify/notify_rpc.o 00:02:25.353 LIB libspdk_notify.a 00:02:25.353 LIB libspdk_trace.a 00:02:25.353 SO libspdk_notify.so.5.0 00:02:25.353 SO libspdk_trace.so.9.0 00:02:25.353 SYMLINK libspdk_notify.so 00:02:25.353 LIB libspdk_sock.a 00:02:25.353 SO libspdk_sock.so.8.0 00:02:25.353 SYMLINK libspdk_trace.so 00:02:25.612 SYMLINK libspdk_sock.so 00:02:25.612 CC lib/thread/thread.o 00:02:25.612 CC lib/thread/iobuf.o 00:02:25.872 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:25.872 CC lib/nvme/nvme_ctrlr.o 00:02:25.872 CC lib/nvme/nvme_fabric.o 00:02:25.872 CC lib/nvme/nvme_ns_cmd.o 00:02:25.872 CC lib/nvme/nvme_ns.o 00:02:25.872 CC lib/nvme/nvme_pcie_common.o 00:02:25.872 CC lib/nvme/nvme.o 00:02:25.872 CC lib/nvme/nvme_pcie.o 00:02:25.872 CC lib/nvme/nvme_qpair.o 00:02:25.872 CC lib/nvme/nvme_discovery.o 00:02:25.872 CC lib/nvme/nvme_quirks.o 00:02:25.872 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.872 CC lib/nvme/nvme_transport.o 00:02:25.872 CC lib/nvme/nvme_tcp.o 00:02:25.872 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.872 CC lib/nvme/nvme_opal.o 00:02:25.872 CC lib/nvme/nvme_io_msg.o 00:02:25.872 CC lib/nvme/nvme_poll_group.o 00:02:25.872 CC lib/nvme/nvme_zns.o 00:02:25.872 CC lib/nvme/nvme_cuse.o 00:02:25.872 CC lib/nvme/nvme_vfio_user.o 00:02:25.872 CC lib/nvme/nvme_rdma.o 00:02:26.810 LIB libspdk_thread.a 00:02:26.810 SO libspdk_thread.so.9.0 00:02:26.810 SYMLINK libspdk_thread.so 00:02:27.070 CC lib/blob/blobstore.o 00:02:27.070 CC lib/blob/zeroes.o 00:02:27.070 CC lib/blob/request.o 00:02:27.070 CC lib/blob/blob_bs_dev.o 00:02:27.070 CC lib/virtio/virtio.o 00:02:27.070 CC lib/virtio/virtio_pci.o 00:02:27.070 CC lib/virtio/virtio_vhost_user.o 00:02:27.070 CC lib/virtio/virtio_vfio_user.o 00:02:27.070 CC lib/init/subsystem_rpc.o 00:02:27.070 CC lib/init/json_config.o 00:02:27.070 CC lib/init/subsystem.o 00:02:27.070 CC lib/init/rpc.o 00:02:27.070 CC lib/accel/accel.o 00:02:27.070 CC lib/accel/accel_rpc.o 00:02:27.070 CC lib/accel/accel_sw.o 00:02:27.328 LIB libspdk_nvme.a 00:02:27.328 LIB libspdk_init.a 00:02:27.328 SO libspdk_nvme.so.12.0 00:02:27.328 LIB libspdk_virtio.a 00:02:27.329 SO libspdk_init.so.4.0 00:02:27.329 SO libspdk_virtio.so.6.0 00:02:27.329 SYMLINK libspdk_init.so 00:02:27.587 SYMLINK libspdk_virtio.so 00:02:27.587 SYMLINK libspdk_nvme.so 00:02:27.587 CC lib/event/log_rpc.o 00:02:27.587 CC lib/event/app.o 00:02:27.587 CC lib/event/scheduler_static.o 00:02:27.587 CC lib/event/reactor.o 00:02:27.587 CC lib/event/app_rpc.o 00:02:27.845 LIB libspdk_accel.a 00:02:27.845 SO libspdk_accel.so.14.0 00:02:27.845 SYMLINK libspdk_accel.so 00:02:27.845 LIB libspdk_event.a 00:02:28.104 SO libspdk_event.so.12.0 00:02:28.104 SYMLINK libspdk_event.so 00:02:28.104 CC lib/bdev/bdev.o 00:02:28.104 CC lib/bdev/part.o 00:02:28.104 CC lib/bdev/bdev_rpc.o 00:02:28.104 CC lib/bdev/scsi_nvme.o 00:02:28.104 CC lib/bdev/bdev_zone.o 00:02:29.042 LIB libspdk_blob.a 00:02:29.042 SO libspdk_blob.so.10.1 00:02:29.042 SYMLINK libspdk_blob.so 00:02:29.301 CC lib/lvol/lvol.o 00:02:29.301 CC lib/blobfs/blobfs.o 00:02:29.301 CC lib/blobfs/tree.o 00:02:29.869 LIB libspdk_lvol.a 00:02:29.869 LIB libspdk_blobfs.a 00:02:29.869 SO libspdk_lvol.so.9.1 00:02:29.869 LIB libspdk_bdev.a 00:02:29.869 SO libspdk_blobfs.so.9.0 00:02:29.869 SO 
libspdk_bdev.so.14.0 00:02:29.869 SYMLINK libspdk_lvol.so 00:02:30.129 SYMLINK libspdk_blobfs.so 00:02:30.129 SYMLINK libspdk_bdev.so 00:02:30.387 CC lib/ublk/ublk.o 00:02:30.387 CC lib/ublk/ublk_rpc.o 00:02:30.387 CC lib/nbd/nbd.o 00:02:30.387 CC lib/nbd/nbd_rpc.o 00:02:30.387 CC lib/nvmf/ctrlr.o 00:02:30.387 CC lib/nvmf/ctrlr_discovery.o 00:02:30.387 CC lib/ftl/ftl_init.o 00:02:30.387 CC lib/nvmf/ctrlr_bdev.o 00:02:30.387 CC lib/ftl/ftl_core.o 00:02:30.387 CC lib/ftl/ftl_debug.o 00:02:30.387 CC lib/nvmf/subsystem.o 00:02:30.387 CC lib/scsi/dev.o 00:02:30.387 CC lib/nvmf/nvmf.o 00:02:30.387 CC lib/nvmf/tcp.o 00:02:30.387 CC lib/ftl/ftl_layout.o 00:02:30.387 CC lib/nvmf/nvmf_rpc.o 00:02:30.387 CC lib/nvmf/transport.o 00:02:30.387 CC lib/ftl/ftl_io.o 00:02:30.387 CC lib/scsi/lun.o 00:02:30.387 CC lib/scsi/port.o 00:02:30.387 CC lib/ftl/ftl_sb.o 00:02:30.387 CC lib/nvmf/rdma.o 00:02:30.387 CC lib/scsi/scsi.o 00:02:30.387 CC lib/ftl/ftl_l2p.o 00:02:30.387 CC lib/scsi/scsi_bdev.o 00:02:30.387 CC lib/ftl/ftl_l2p_flat.o 00:02:30.387 CC lib/scsi/scsi_pr.o 00:02:30.387 CC lib/ftl/ftl_nv_cache.o 00:02:30.387 CC lib/scsi/scsi_rpc.o 00:02:30.387 CC lib/scsi/task.o 00:02:30.387 CC lib/ftl/ftl_band.o 00:02:30.387 CC lib/ftl/ftl_rq.o 00:02:30.387 CC lib/ftl/ftl_band_ops.o 00:02:30.387 CC lib/ftl/ftl_writer.o 00:02:30.387 CC lib/ftl/ftl_l2p_cache.o 00:02:30.387 CC lib/ftl/ftl_reloc.o 00:02:30.387 CC lib/ftl/ftl_p2l.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:30.387 CC lib/ftl/utils/ftl_conf.o 00:02:30.387 CC lib/ftl/utils/ftl_md.o 00:02:30.387 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:30.387 CC lib/ftl/utils/ftl_mempool.o 00:02:30.387 CC lib/ftl/utils/ftl_bitmap.o 00:02:30.387 CC lib/ftl/utils/ftl_property.o 00:02:30.387 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:30.387 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:30.387 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:30.387 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:30.387 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:30.387 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:30.387 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:30.387 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:30.387 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:30.387 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:30.387 CC lib/ftl/base/ftl_base_dev.o 00:02:30.387 CC lib/ftl/ftl_trace.o 00:02:30.387 CC lib/ftl/base/ftl_base_bdev.o 00:02:30.647 LIB libspdk_nbd.a 00:02:30.647 SO libspdk_nbd.so.6.0 00:02:30.906 SYMLINK libspdk_nbd.so 00:02:30.906 LIB libspdk_scsi.a 00:02:30.906 SO libspdk_scsi.so.8.0 00:02:30.906 LIB libspdk_ublk.a 00:02:30.906 SO libspdk_ublk.so.2.0 00:02:30.906 SYMLINK libspdk_scsi.so 00:02:30.906 SYMLINK libspdk_ublk.so 00:02:31.164 LIB libspdk_ftl.a 00:02:31.164 CC lib/vhost/vhost.o 00:02:31.164 CC lib/iscsi/init_grp.o 00:02:31.164 CC lib/vhost/vhost_rpc.o 00:02:31.164 CC lib/vhost/rte_vhost_user.o 00:02:31.164 CC lib/iscsi/iscsi.o 00:02:31.164 CC lib/iscsi/conn.o 00:02:31.164 CC lib/vhost/vhost_scsi.o 00:02:31.164 CC lib/vhost/vhost_blk.o 00:02:31.164 CC lib/iscsi/md5.o 00:02:31.164 CC lib/iscsi/tgt_node.o 
00:02:31.164 CC lib/iscsi/param.o 00:02:31.164 CC lib/iscsi/portal_grp.o 00:02:31.164 CC lib/iscsi/iscsi_subsystem.o 00:02:31.164 SO libspdk_ftl.so.8.0 00:02:31.164 CC lib/iscsi/iscsi_rpc.o 00:02:31.164 CC lib/iscsi/task.o 00:02:31.422 SYMLINK libspdk_ftl.so 00:02:31.989 LIB libspdk_nvmf.a 00:02:31.990 SO libspdk_nvmf.so.17.0 00:02:31.990 LIB libspdk_vhost.a 00:02:31.990 SO libspdk_vhost.so.7.1 00:02:31.990 SYMLINK libspdk_nvmf.so 00:02:32.249 SYMLINK libspdk_vhost.so 00:02:32.249 LIB libspdk_iscsi.a 00:02:32.249 SO libspdk_iscsi.so.7.0 00:02:32.507 SYMLINK libspdk_iscsi.so 00:02:32.766 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.766 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.766 CC module/sock/posix/posix.o 00:02:32.766 CC module/accel/ioat/accel_ioat.o 00:02:32.766 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.766 CC module/accel/dsa/accel_dsa.o 00:02:32.766 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.766 CC module/accel/error/accel_error.o 00:02:32.766 CC module/accel/error/accel_error_rpc.o 00:02:32.766 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:32.766 CC module/scheduler/gscheduler/gscheduler.o 00:02:32.766 CC module/accel/iaa/accel_iaa.o 00:02:32.766 CC module/accel/iaa/accel_iaa_rpc.o 00:02:32.766 CC module/blob/bdev/blob_bdev.o 00:02:32.766 LIB libspdk_env_dpdk_rpc.a 00:02:33.024 SO libspdk_env_dpdk_rpc.so.5.0 00:02:33.024 SYMLINK libspdk_env_dpdk_rpc.so 00:02:33.024 LIB libspdk_scheduler_dpdk_governor.a 00:02:33.024 LIB libspdk_scheduler_gscheduler.a 00:02:33.024 LIB libspdk_accel_error.a 00:02:33.024 LIB libspdk_accel_ioat.a 00:02:33.024 LIB libspdk_scheduler_dynamic.a 00:02:33.024 SO libspdk_scheduler_gscheduler.so.3.0 00:02:33.024 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:33.024 SO libspdk_accel_error.so.1.0 00:02:33.024 SO libspdk_accel_ioat.so.5.0 00:02:33.024 LIB libspdk_accel_iaa.a 00:02:33.025 SO libspdk_scheduler_dynamic.so.3.0 00:02:33.025 LIB libspdk_accel_dsa.a 00:02:33.025 SYMLINK libspdk_scheduler_gscheduler.so 00:02:33.025 LIB libspdk_blob_bdev.a 00:02:33.025 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:33.025 SO libspdk_accel_iaa.so.2.0 00:02:33.025 SO libspdk_accel_dsa.so.4.0 00:02:33.025 SYMLINK libspdk_accel_ioat.so 00:02:33.025 SYMLINK libspdk_accel_error.so 00:02:33.025 SO libspdk_blob_bdev.so.10.1 00:02:33.025 SYMLINK libspdk_scheduler_dynamic.so 00:02:33.284 SYMLINK libspdk_accel_dsa.so 00:02:33.284 SYMLINK libspdk_accel_iaa.so 00:02:33.284 SYMLINK libspdk_blob_bdev.so 00:02:33.284 LIB libspdk_sock_posix.a 00:02:33.543 SO libspdk_sock_posix.so.5.0 00:02:33.543 CC module/bdev/delay/vbdev_delay.o 00:02:33.543 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.543 CC module/bdev/raid/bdev_raid_rpc.o 00:02:33.543 CC module/bdev/raid/bdev_raid.o 00:02:33.543 CC module/bdev/raid/bdev_raid_sb.o 00:02:33.543 CC module/bdev/raid/raid0.o 00:02:33.543 CC module/bdev/raid/raid1.o 00:02:33.543 CC module/bdev/error/vbdev_error.o 00:02:33.543 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.543 CC module/bdev/raid/concat.o 00:02:33.543 CC module/bdev/gpt/gpt.o 00:02:33.543 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:33.543 CC module/bdev/iscsi/bdev_iscsi.o 00:02:33.543 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.543 SYMLINK libspdk_sock_posix.so 00:02:33.543 CC module/bdev/aio/bdev_aio.o 00:02:33.543 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.543 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.543 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.543 CC module/bdev/aio/bdev_aio_rpc.o 00:02:33.543 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:33.543 CC module/bdev/nvme/bdev_nvme.o 00:02:33.543 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.543 CC module/bdev/nvme/nvme_rpc.o 00:02:33.543 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.543 CC module/bdev/nvme/bdev_mdns_client.o 00:02:33.543 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.543 CC module/bdev/nvme/vbdev_opal.o 00:02:33.543 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:33.543 CC module/bdev/malloc/bdev_malloc.o 00:02:33.543 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.543 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:33.543 CC module/bdev/ftl/bdev_ftl.o 00:02:33.543 CC module/bdev/null/bdev_null.o 00:02:33.543 CC module/blobfs/bdev/blobfs_bdev.o 00:02:33.543 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.543 CC module/bdev/null/bdev_null_rpc.o 00:02:33.543 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.543 CC module/bdev/split/vbdev_split.o 00:02:33.543 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.543 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.543 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.543 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.801 LIB libspdk_blobfs_bdev.a 00:02:33.801 SO libspdk_blobfs_bdev.so.5.0 00:02:33.801 LIB libspdk_bdev_split.a 00:02:33.801 LIB libspdk_bdev_gpt.a 00:02:33.801 LIB libspdk_bdev_null.a 00:02:33.801 LIB libspdk_bdev_error.a 00:02:33.801 SO libspdk_bdev_gpt.so.5.0 00:02:33.801 LIB libspdk_bdev_passthru.a 00:02:33.801 SO libspdk_bdev_split.so.5.0 00:02:33.801 LIB libspdk_bdev_ftl.a 00:02:33.801 SO libspdk_bdev_null.so.5.0 00:02:33.801 LIB libspdk_bdev_aio.a 00:02:33.801 SYMLINK libspdk_blobfs_bdev.so 00:02:33.801 SO libspdk_bdev_error.so.5.0 00:02:33.801 SO libspdk_bdev_passthru.so.5.0 00:02:33.801 LIB libspdk_bdev_zone_block.a 00:02:33.801 LIB libspdk_bdev_delay.a 00:02:33.801 LIB libspdk_bdev_iscsi.a 00:02:33.801 SO libspdk_bdev_ftl.so.5.0 00:02:33.801 LIB libspdk_bdev_malloc.a 00:02:33.801 SO libspdk_bdev_aio.so.5.0 00:02:33.801 SYMLINK libspdk_bdev_gpt.so 00:02:33.801 SO libspdk_bdev_delay.so.5.0 00:02:33.801 SO libspdk_bdev_zone_block.so.5.0 00:02:33.801 SYMLINK libspdk_bdev_split.so 00:02:33.801 SYMLINK libspdk_bdev_error.so 00:02:33.801 SYMLINK libspdk_bdev_null.so 00:02:33.801 SO libspdk_bdev_iscsi.so.5.0 00:02:34.060 SO libspdk_bdev_malloc.so.5.0 00:02:34.060 SYMLINK libspdk_bdev_passthru.so 00:02:34.060 SYMLINK libspdk_bdev_ftl.so 00:02:34.060 SYMLINK libspdk_bdev_delay.so 00:02:34.060 SYMLINK libspdk_bdev_aio.so 00:02:34.060 LIB libspdk_bdev_lvol.a 00:02:34.060 SYMLINK libspdk_bdev_zone_block.so 00:02:34.060 SYMLINK libspdk_bdev_iscsi.so 00:02:34.060 SYMLINK libspdk_bdev_malloc.so 00:02:34.060 SO libspdk_bdev_lvol.so.5.0 00:02:34.060 LIB libspdk_bdev_virtio.a 00:02:34.060 SO libspdk_bdev_virtio.so.5.0 00:02:34.060 SYMLINK libspdk_bdev_lvol.so 00:02:34.060 SYMLINK libspdk_bdev_virtio.so 00:02:34.318 LIB libspdk_bdev_raid.a 00:02:34.318 SO libspdk_bdev_raid.so.5.0 00:02:34.318 SYMLINK libspdk_bdev_raid.so 00:02:35.253 LIB libspdk_bdev_nvme.a 00:02:35.253 SO libspdk_bdev_nvme.so.6.0 00:02:35.253 SYMLINK libspdk_bdev_nvme.so 00:02:35.822 CC module/event/subsystems/sock/sock.o 00:02:35.822 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:35.822 CC module/event/subsystems/scheduler/scheduler.o 00:02:35.822 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:35.822 CC module/event/subsystems/vmd/vmd.o 00:02:35.822 CC module/event/subsystems/iobuf/iobuf.o 00:02:35.822 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:35.822 LIB libspdk_event_sock.a 
00:02:35.822 LIB libspdk_event_vhost_blk.a 00:02:35.822 SO libspdk_event_sock.so.4.0 00:02:35.822 LIB libspdk_event_vmd.a 00:02:35.822 LIB libspdk_event_scheduler.a 00:02:35.822 LIB libspdk_event_iobuf.a 00:02:35.822 SO libspdk_event_vmd.so.5.0 00:02:35.822 SO libspdk_event_vhost_blk.so.2.0 00:02:35.822 SO libspdk_event_scheduler.so.3.0 00:02:36.081 SO libspdk_event_iobuf.so.2.0 00:02:36.081 SYMLINK libspdk_event_sock.so 00:02:36.081 SYMLINK libspdk_event_vhost_blk.so 00:02:36.081 SYMLINK libspdk_event_vmd.so 00:02:36.081 SYMLINK libspdk_event_scheduler.so 00:02:36.081 SYMLINK libspdk_event_iobuf.so 00:02:36.340 CC module/event/subsystems/accel/accel.o 00:02:36.340 LIB libspdk_event_accel.a 00:02:36.340 SO libspdk_event_accel.so.5.0 00:02:36.340 SYMLINK libspdk_event_accel.so 00:02:36.599 CC module/event/subsystems/bdev/bdev.o 00:02:36.858 LIB libspdk_event_bdev.a 00:02:36.858 SO libspdk_event_bdev.so.5.0 00:02:36.858 SYMLINK libspdk_event_bdev.so 00:02:37.117 CC module/event/subsystems/ublk/ublk.o 00:02:37.117 CC module/event/subsystems/scsi/scsi.o 00:02:37.117 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:37.117 CC module/event/subsystems/nbd/nbd.o 00:02:37.117 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:37.117 LIB libspdk_event_ublk.a 00:02:37.376 LIB libspdk_event_nbd.a 00:02:37.376 LIB libspdk_event_scsi.a 00:02:37.376 SO libspdk_event_ublk.so.2.0 00:02:37.376 SO libspdk_event_scsi.so.5.0 00:02:37.376 SO libspdk_event_nbd.so.5.0 00:02:37.376 LIB libspdk_event_nvmf.a 00:02:37.376 SYMLINK libspdk_event_ublk.so 00:02:37.376 SYMLINK libspdk_event_scsi.so 00:02:37.376 SO libspdk_event_nvmf.so.5.0 00:02:37.376 SYMLINK libspdk_event_nbd.so 00:02:37.376 SYMLINK libspdk_event_nvmf.so 00:02:37.635 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:37.635 CC module/event/subsystems/iscsi/iscsi.o 00:02:37.635 LIB libspdk_event_vhost_scsi.a 00:02:37.635 LIB libspdk_event_iscsi.a 00:02:37.635 SO libspdk_event_vhost_scsi.so.2.0 00:02:37.894 SO libspdk_event_iscsi.so.5.0 00:02:37.894 SYMLINK libspdk_event_vhost_scsi.so 00:02:37.894 SYMLINK libspdk_event_iscsi.so 00:02:37.894 SO libspdk.so.5.0 00:02:37.894 SYMLINK libspdk.so 00:02:38.154 CC app/spdk_lspci/spdk_lspci.o 00:02:38.154 CC test/rpc_client/rpc_client_test.o 00:02:38.154 CC app/spdk_nvme_identify/identify.o 00:02:38.154 TEST_HEADER include/spdk/accel_module.h 00:02:38.154 CC app/trace_record/trace_record.o 00:02:38.154 TEST_HEADER include/spdk/accel.h 00:02:38.154 CXX app/trace/trace.o 00:02:38.154 TEST_HEADER include/spdk/barrier.h 00:02:38.154 TEST_HEADER include/spdk/assert.h 00:02:38.154 TEST_HEADER include/spdk/base64.h 00:02:38.154 TEST_HEADER include/spdk/bdev.h 00:02:38.154 TEST_HEADER include/spdk/bdev_zone.h 00:02:38.154 CC app/spdk_top/spdk_top.o 00:02:38.154 TEST_HEADER include/spdk/bdev_module.h 00:02:38.154 TEST_HEADER include/spdk/bit_array.h 00:02:38.154 TEST_HEADER include/spdk/blob_bdev.h 00:02:38.154 TEST_HEADER include/spdk/bit_pool.h 00:02:38.154 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:38.154 TEST_HEADER include/spdk/blobfs.h 00:02:38.154 TEST_HEADER include/spdk/blob.h 00:02:38.154 TEST_HEADER include/spdk/conf.h 00:02:38.154 CC app/spdk_nvme_discover/discovery_aer.o 00:02:38.154 TEST_HEADER include/spdk/config.h 00:02:38.154 TEST_HEADER include/spdk/cpuset.h 00:02:38.154 TEST_HEADER include/spdk/crc16.h 00:02:38.154 CC app/spdk_nvme_perf/perf.o 00:02:38.154 TEST_HEADER include/spdk/crc32.h 00:02:38.154 TEST_HEADER include/spdk/crc64.h 00:02:38.154 TEST_HEADER include/spdk/dif.h 00:02:38.154 
TEST_HEADER include/spdk/endian.h 00:02:38.154 TEST_HEADER include/spdk/dma.h 00:02:38.154 TEST_HEADER include/spdk/env.h 00:02:38.154 TEST_HEADER include/spdk/env_dpdk.h 00:02:38.154 TEST_HEADER include/spdk/event.h 00:02:38.154 TEST_HEADER include/spdk/fd_group.h 00:02:38.154 TEST_HEADER include/spdk/fd.h 00:02:38.154 TEST_HEADER include/spdk/file.h 00:02:38.154 TEST_HEADER include/spdk/ftl.h 00:02:38.154 TEST_HEADER include/spdk/hexlify.h 00:02:38.154 TEST_HEADER include/spdk/gpt_spec.h 00:02:38.424 TEST_HEADER include/spdk/histogram_data.h 00:02:38.424 TEST_HEADER include/spdk/idxd.h 00:02:38.424 TEST_HEADER include/spdk/init.h 00:02:38.424 TEST_HEADER include/spdk/idxd_spec.h 00:02:38.424 TEST_HEADER include/spdk/ioat_spec.h 00:02:38.424 TEST_HEADER include/spdk/ioat.h 00:02:38.424 TEST_HEADER include/spdk/iscsi_spec.h 00:02:38.424 TEST_HEADER include/spdk/json.h 00:02:38.424 TEST_HEADER include/spdk/jsonrpc.h 00:02:38.424 TEST_HEADER include/spdk/likely.h 00:02:38.424 TEST_HEADER include/spdk/log.h 00:02:38.424 TEST_HEADER include/spdk/lvol.h 00:02:38.424 TEST_HEADER include/spdk/memory.h 00:02:38.424 TEST_HEADER include/spdk/mmio.h 00:02:38.424 TEST_HEADER include/spdk/notify.h 00:02:38.424 TEST_HEADER include/spdk/nbd.h 00:02:38.424 TEST_HEADER include/spdk/nvme.h 00:02:38.424 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:38.424 TEST_HEADER include/spdk/nvme_intel.h 00:02:38.424 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:38.424 TEST_HEADER include/spdk/nvme_spec.h 00:02:38.424 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:38.424 TEST_HEADER include/spdk/nvme_zns.h 00:02:38.424 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:38.424 TEST_HEADER include/spdk/nvmf.h 00:02:38.424 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:38.424 TEST_HEADER include/spdk/nvmf_spec.h 00:02:38.424 TEST_HEADER include/spdk/nvmf_transport.h 00:02:38.424 TEST_HEADER include/spdk/opal.h 00:02:38.424 TEST_HEADER include/spdk/opal_spec.h 00:02:38.424 TEST_HEADER include/spdk/pipe.h 00:02:38.424 TEST_HEADER include/spdk/pci_ids.h 00:02:38.424 CC app/spdk_dd/spdk_dd.o 00:02:38.424 TEST_HEADER include/spdk/reduce.h 00:02:38.424 TEST_HEADER include/spdk/queue.h 00:02:38.424 TEST_HEADER include/spdk/rpc.h 00:02:38.424 TEST_HEADER include/spdk/scheduler.h 00:02:38.424 TEST_HEADER include/spdk/scsi.h 00:02:38.424 CC app/nvmf_tgt/nvmf_main.o 00:02:38.424 TEST_HEADER include/spdk/scsi_spec.h 00:02:38.424 TEST_HEADER include/spdk/sock.h 00:02:38.424 CC app/iscsi_tgt/iscsi_tgt.o 00:02:38.424 TEST_HEADER include/spdk/stdinc.h 00:02:38.424 TEST_HEADER include/spdk/thread.h 00:02:38.424 TEST_HEADER include/spdk/string.h 00:02:38.424 TEST_HEADER include/spdk/trace.h 00:02:38.424 TEST_HEADER include/spdk/tree.h 00:02:38.424 TEST_HEADER include/spdk/trace_parser.h 00:02:38.424 TEST_HEADER include/spdk/util.h 00:02:38.424 TEST_HEADER include/spdk/ublk.h 00:02:38.424 TEST_HEADER include/spdk/uuid.h 00:02:38.424 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:38.424 TEST_HEADER include/spdk/version.h 00:02:38.424 CC app/vhost/vhost.o 00:02:38.424 TEST_HEADER include/spdk/vhost.h 00:02:38.424 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:38.424 TEST_HEADER include/spdk/xor.h 00:02:38.424 TEST_HEADER include/spdk/vmd.h 00:02:38.424 TEST_HEADER include/spdk/zipf.h 00:02:38.424 CC app/spdk_tgt/spdk_tgt.o 00:02:38.424 CXX test/cpp_headers/accel.o 00:02:38.424 CXX test/cpp_headers/accel_module.o 00:02:38.424 CXX test/cpp_headers/assert.o 00:02:38.424 CXX test/cpp_headers/barrier.o 00:02:38.424 CXX test/cpp_headers/bdev.o 00:02:38.424 
CXX test/cpp_headers/base64.o 00:02:38.424 CXX test/cpp_headers/bdev_module.o 00:02:38.424 CXX test/cpp_headers/bdev_zone.o 00:02:38.424 CXX test/cpp_headers/bit_array.o 00:02:38.424 CXX test/cpp_headers/bit_pool.o 00:02:38.424 CXX test/cpp_headers/blobfs.o 00:02:38.424 CXX test/cpp_headers/blobfs_bdev.o 00:02:38.424 CXX test/cpp_headers/blob_bdev.o 00:02:38.424 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:38.424 CXX test/cpp_headers/blob.o 00:02:38.424 CXX test/cpp_headers/conf.o 00:02:38.424 CXX test/cpp_headers/config.o 00:02:38.424 CXX test/cpp_headers/cpuset.o 00:02:38.424 CXX test/cpp_headers/crc16.o 00:02:38.424 CC test/env/pci/pci_ut.o 00:02:38.424 CC test/env/vtophys/vtophys.o 00:02:38.424 CXX test/cpp_headers/crc32.o 00:02:38.424 CXX test/cpp_headers/dif.o 00:02:38.424 CXX test/cpp_headers/crc64.o 00:02:38.424 CXX test/cpp_headers/dma.o 00:02:38.424 CXX test/cpp_headers/endian.o 00:02:38.424 CXX test/cpp_headers/env_dpdk.o 00:02:38.424 CXX test/cpp_headers/env.o 00:02:38.424 CC test/app/histogram_perf/histogram_perf.o 00:02:38.424 CXX test/cpp_headers/fd_group.o 00:02:38.424 CXX test/cpp_headers/event.o 00:02:38.424 CXX test/cpp_headers/fd.o 00:02:38.424 CXX test/cpp_headers/file.o 00:02:38.424 CXX test/cpp_headers/gpt_spec.o 00:02:38.424 CXX test/cpp_headers/ftl.o 00:02:38.424 CC test/thread/poller_perf/poller_perf.o 00:02:38.424 CC test/env/memory/memory_ut.o 00:02:38.424 CXX test/cpp_headers/hexlify.o 00:02:38.424 CC test/nvme/sgl/sgl.o 00:02:38.424 CC test/event/event_perf/event_perf.o 00:02:38.424 CC test/app/stub/stub.o 00:02:38.424 CXX test/cpp_headers/histogram_data.o 00:02:38.424 CC test/nvme/aer/aer.o 00:02:38.424 CXX test/cpp_headers/idxd.o 00:02:38.424 CC test/event/reactor/reactor.o 00:02:38.424 CXX test/cpp_headers/idxd_spec.o 00:02:38.424 CXX test/cpp_headers/init.o 00:02:38.424 CC test/event/reactor_perf/reactor_perf.o 00:02:38.424 CC test/nvme/startup/startup.o 00:02:38.424 CC test/app/jsoncat/jsoncat.o 00:02:38.424 CC test/nvme/connect_stress/connect_stress.o 00:02:38.424 CC test/nvme/fused_ordering/fused_ordering.o 00:02:38.424 CC test/nvme/e2edp/nvme_dp.o 00:02:38.424 CC test/nvme/err_injection/err_injection.o 00:02:38.424 CC examples/util/zipf/zipf.o 00:02:38.424 CC examples/accel/perf/accel_perf.o 00:02:38.424 CC test/nvme/overhead/overhead.o 00:02:38.424 CC test/nvme/simple_copy/simple_copy.o 00:02:38.424 CC examples/sock/hello_world/hello_sock.o 00:02:38.424 CC test/nvme/compliance/nvme_compliance.o 00:02:38.424 CC test/nvme/reset/reset.o 00:02:38.424 CC test/nvme/reserve/reserve.o 00:02:38.424 CC examples/nvme/reconnect/reconnect.o 00:02:38.424 CC examples/idxd/perf/perf.o 00:02:38.424 CC test/nvme/boot_partition/boot_partition.o 00:02:38.424 CC test/nvme/cuse/cuse.o 00:02:38.424 CC examples/vmd/lsvmd/lsvmd.o 00:02:38.424 CC test/app/bdev_svc/bdev_svc.o 00:02:38.424 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:38.424 CC test/blobfs/mkfs/mkfs.o 00:02:38.424 CC examples/nvme/hello_world/hello_world.o 00:02:38.424 CC test/accel/dif/dif.o 00:02:38.424 CC app/fio/nvme/fio_plugin.o 00:02:38.424 CC test/event/app_repeat/app_repeat.o 00:02:38.424 CC examples/nvme/hotplug/hotplug.o 00:02:38.424 CC test/nvme/fdp/fdp.o 00:02:38.424 CC test/dma/test_dma/test_dma.o 00:02:38.424 CC examples/nvme/arbitration/arbitration.o 00:02:38.424 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:38.424 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:38.424 CC examples/ioat/verify/verify.o 00:02:38.424 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:38.424 CC 
examples/nvme/abort/abort.o 00:02:38.424 CC examples/blob/hello_world/hello_blob.o 00:02:38.424 CC examples/vmd/led/led.o 00:02:38.424 CC examples/blob/cli/blobcli.o 00:02:38.424 CC examples/ioat/perf/perf.o 00:02:38.424 CC test/event/scheduler/scheduler.o 00:02:38.424 CXX test/cpp_headers/ioat.o 00:02:38.424 CC examples/bdev/hello_world/hello_bdev.o 00:02:38.424 CC examples/thread/thread/thread_ex.o 00:02:38.424 CC test/bdev/bdevio/bdevio.o 00:02:38.424 CC examples/nvmf/nvmf/nvmf.o 00:02:38.424 CC examples/bdev/bdevperf/bdevperf.o 00:02:38.424 CC app/fio/bdev/fio_plugin.o 00:02:38.690 CC test/env/mem_callbacks/mem_callbacks.o 00:02:38.690 CC test/lvol/esnap/esnap.o 00:02:38.690 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:38.690 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:38.690 LINK spdk_lspci 00:02:38.690 LINK rpc_client_test 00:02:38.690 LINK spdk_nvme_discover 00:02:38.958 LINK jsoncat 00:02:38.958 LINK nvmf_tgt 00:02:38.958 LINK interrupt_tgt 00:02:38.958 LINK env_dpdk_post_init 00:02:38.958 LINK reactor 00:02:38.958 LINK vtophys 00:02:38.958 LINK histogram_perf 00:02:38.958 LINK iscsi_tgt 00:02:38.958 LINK vhost 00:02:38.958 LINK poller_perf 00:02:38.958 LINK event_perf 00:02:38.958 LINK reactor_perf 00:02:38.958 LINK stub 00:02:38.958 LINK startup 00:02:38.958 LINK zipf 00:02:38.958 LINK lsvmd 00:02:38.958 LINK boot_partition 00:02:38.958 LINK led 00:02:38.958 LINK err_injection 00:02:38.958 LINK app_repeat 00:02:38.958 LINK bdev_svc 00:02:38.958 LINK spdk_trace_record 00:02:38.958 LINK doorbell_aers 00:02:38.958 LINK mkfs 00:02:38.958 LINK spdk_tgt 00:02:38.958 LINK fused_ordering 00:02:38.958 LINK reserve 00:02:38.958 LINK pmr_persistence 00:02:38.958 LINK connect_stress 00:02:38.958 LINK cmb_copy 00:02:38.958 CXX test/cpp_headers/ioat_spec.o 00:02:38.958 LINK simple_copy 00:02:38.958 CXX test/cpp_headers/iscsi_spec.o 00:02:38.958 LINK hello_world 00:02:38.958 LINK verify 00:02:38.958 CXX test/cpp_headers/json.o 00:02:38.958 CXX test/cpp_headers/jsonrpc.o 00:02:38.958 CXX test/cpp_headers/likely.o 00:02:38.958 CXX test/cpp_headers/log.o 00:02:38.958 CXX test/cpp_headers/lvol.o 00:02:38.958 CXX test/cpp_headers/memory.o 00:02:38.958 CXX test/cpp_headers/mmio.o 00:02:38.958 CXX test/cpp_headers/nbd.o 00:02:38.958 CXX test/cpp_headers/notify.o 00:02:38.958 CXX test/cpp_headers/nvme.o 00:02:38.958 CXX test/cpp_headers/nvme_intel.o 00:02:38.958 CXX test/cpp_headers/nvme_ocssd.o 00:02:38.958 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:38.958 CXX test/cpp_headers/nvme_spec.o 00:02:38.958 CXX test/cpp_headers/nvme_zns.o 00:02:38.958 CXX test/cpp_headers/nvmf_cmd.o 00:02:38.958 LINK sgl 00:02:38.958 LINK hello_bdev 00:02:38.958 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:38.958 CXX test/cpp_headers/nvmf_spec.o 00:02:38.958 CXX test/cpp_headers/nvmf.o 00:02:38.958 CXX test/cpp_headers/nvmf_transport.o 00:02:38.958 CXX test/cpp_headers/opal.o 00:02:38.958 LINK ioat_perf 00:02:38.958 CXX test/cpp_headers/opal_spec.o 00:02:38.958 CXX test/cpp_headers/pci_ids.o 00:02:38.958 CXX test/cpp_headers/pipe.o 00:02:38.958 LINK scheduler 00:02:38.958 CXX test/cpp_headers/queue.o 00:02:38.958 CXX test/cpp_headers/reduce.o 00:02:39.247 LINK hello_blob 00:02:39.247 CXX test/cpp_headers/rpc.o 00:02:39.247 CXX test/cpp_headers/scheduler.o 00:02:39.247 CXX test/cpp_headers/scsi.o 00:02:39.247 CXX test/cpp_headers/scsi_spec.o 00:02:39.247 LINK hotplug 00:02:39.247 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.247 CXX test/cpp_headers/sock.o 00:02:39.247 LINK reset 00:02:39.247 LINK overhead 
00:02:39.247 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:39.247 LINK hello_sock 00:02:39.247 CXX test/cpp_headers/stdinc.o 00:02:39.247 CXX test/cpp_headers/string.o 00:02:39.247 CXX test/cpp_headers/thread.o 00:02:39.247 LINK thread 00:02:39.247 LINK nvme_dp 00:02:39.247 LINK aer 00:02:39.247 CXX test/cpp_headers/trace.o 00:02:39.247 CXX test/cpp_headers/trace_parser.o 00:02:39.247 LINK spdk_dd 00:02:39.247 CXX test/cpp_headers/tree.o 00:02:39.247 LINK fdp 00:02:39.247 LINK nvme_compliance 00:02:39.247 LINK nvmf 00:02:39.247 LINK idxd_perf 00:02:39.247 CXX test/cpp_headers/ublk.o 00:02:39.247 LINK arbitration 00:02:39.247 CXX test/cpp_headers/util.o 00:02:39.247 LINK test_dma 00:02:39.247 CXX test/cpp_headers/uuid.o 00:02:39.247 CXX test/cpp_headers/version.o 00:02:39.247 CXX test/cpp_headers/vfio_user_spec.o 00:02:39.247 CXX test/cpp_headers/vfio_user_pci.o 00:02:39.247 CXX test/cpp_headers/vhost.o 00:02:39.247 LINK pci_ut 00:02:39.247 CXX test/cpp_headers/vmd.o 00:02:39.247 CXX test/cpp_headers/xor.o 00:02:39.247 CXX test/cpp_headers/zipf.o 00:02:39.247 LINK reconnect 00:02:39.247 LINK dif 00:02:39.247 LINK bdevio 00:02:39.247 LINK spdk_trace 00:02:39.507 LINK abort 00:02:39.507 LINK accel_perf 00:02:39.507 LINK blobcli 00:02:39.508 LINK nvme_fuzz 00:02:39.508 LINK spdk_nvme 00:02:39.508 LINK nvme_manage 00:02:39.508 LINK spdk_bdev 00:02:39.766 LINK mem_callbacks 00:02:39.766 LINK bdevperf 00:02:39.766 LINK vhost_fuzz 00:02:39.766 LINK spdk_nvme_perf 00:02:39.766 LINK spdk_nvme_identify 00:02:39.766 LINK spdk_top 00:02:39.766 LINK memory_ut 00:02:39.766 LINK cuse 00:02:40.335 LINK iscsi_fuzz 00:02:42.240 LINK esnap 00:02:42.499 00:02:42.499 real 0m44.784s 00:02:42.499 user 6m11.679s 00:02:42.499 sys 3m39.211s 00:02:42.499 23:01:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:42.499 23:01:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.499 ************************************ 00:02:42.499 END TEST make 00:02:42.499 ************************************ 00:02:42.499 23:01:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:42.499 23:01:48 -- nvmf/common.sh@7 -- # uname -s 00:02:42.758 23:01:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:42.758 23:01:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:42.758 23:01:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:42.758 23:01:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:42.758 23:01:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:42.758 23:01:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:42.758 23:01:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:42.758 23:01:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:42.758 23:01:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:42.758 23:01:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:42.758 23:01:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:42.758 23:01:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:42.758 23:01:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:42.758 23:01:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:42.758 23:01:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:42.758 23:01:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:42.758 23:01:48 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:02:42.758 23:01:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:42.758 23:01:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:42.758 23:01:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.758 23:01:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.758 23:01:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.758 23:01:48 -- paths/export.sh@5 -- # export PATH 00:02:42.758 23:01:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.758 23:01:48 -- nvmf/common.sh@46 -- # : 0 00:02:42.758 23:01:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:42.758 23:01:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:42.758 23:01:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:42.758 23:01:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:42.758 23:01:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:42.758 23:01:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:42.758 23:01:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:42.758 23:01:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:42.758 23:01:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:42.758 23:01:48 -- spdk/autotest.sh@32 -- # uname -s 00:02:42.758 23:01:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:42.758 23:01:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:42.758 23:01:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:42.758 23:01:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:42.758 23:01:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:42.758 23:01:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:42.758 23:01:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:42.758 23:01:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:42.758 23:01:48 -- spdk/autotest.sh@48 -- # udevadm_pid=388792 00:02:42.758 23:01:48 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:42.758 23:01:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:42.758 23:01:48 -- spdk/autotest.sh@54 -- # echo 388794 00:02:42.758 23:01:48 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:42.758 23:01:48 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:42.758 23:01:48 -- spdk/autotest.sh@56 -- # echo 388795 00:02:42.759 23:01:48 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:42.759 23:01:48 -- spdk/autotest.sh@60 -- # echo 388796 00:02:42.759 23:01:48 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:42.759 23:01:48 -- spdk/autotest.sh@62 -- # echo 388797 00:02:42.759 23:01:48 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:42.759 23:01:48 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:42.759 23:01:48 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:42.759 23:01:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:42.759 23:01:48 -- common/autotest_common.sh@10 -- # set +x 00:02:42.759 23:01:48 -- spdk/autotest.sh@70 -- # create_test_list 00:02:42.759 23:01:48 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:42.759 23:01:48 -- common/autotest_common.sh@10 -- # set +x 00:02:42.759 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:42.759 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:42.759 23:01:48 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:42.759 23:01:48 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:42.759 23:01:48 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:42.759 23:01:48 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:42.759 23:01:48 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:42.759 23:01:48 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:42.759 23:01:48 -- common/autotest_common.sh@1440 -- # uname 00:02:42.759 23:01:48 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:42.759 23:01:48 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:42.759 23:01:48 -- common/autotest_common.sh@1460 -- # uname 00:02:42.759 23:01:48 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:42.759 23:01:48 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:42.759 23:01:48 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:42.759 23:01:48 -- spdk/autotest.sh@83 -- # hash lcov 00:02:42.759 23:01:48 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:42.759 23:01:48 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:42.759 --rc lcov_branch_coverage=1 00:02:42.759 --rc lcov_function_coverage=1 00:02:42.759 --rc genhtml_branch_coverage=1 00:02:42.759 --rc genhtml_function_coverage=1 00:02:42.759 --rc genhtml_legend=1 00:02:42.759 --rc geninfo_all_blocks=1 00:02:42.759 ' 00:02:42.759 23:01:48 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:42.759 --rc lcov_branch_coverage=1 00:02:42.759 --rc lcov_function_coverage=1 00:02:42.759 --rc genhtml_branch_coverage=1 00:02:42.759 --rc genhtml_function_coverage=1 00:02:42.759 --rc 
genhtml_legend=1 00:02:42.759 --rc geninfo_all_blocks=1 00:02:42.759 ' 00:02:42.759 23:01:48 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:42.759 --rc lcov_branch_coverage=1 00:02:42.759 --rc lcov_function_coverage=1 00:02:42.759 --rc genhtml_branch_coverage=1 00:02:42.759 --rc genhtml_function_coverage=1 00:02:42.759 --rc genhtml_legend=1 00:02:42.759 --rc geninfo_all_blocks=1 00:02:42.759 --no-external' 00:02:42.759 23:01:48 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:42.759 --rc lcov_branch_coverage=1 00:02:42.759 --rc lcov_function_coverage=1 00:02:42.759 --rc genhtml_branch_coverage=1 00:02:42.759 --rc genhtml_function_coverage=1 00:02:42.759 --rc genhtml_legend=1 00:02:42.759 --rc geninfo_all_blocks=1 00:02:42.759 --no-external' 00:02:42.759 23:01:48 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:42.759 lcov: LCOV version 1.15 00:02:42.759 23:01:48 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:44.136 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:44.136 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:44.136 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:44.137 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:44.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:44.137 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:44.395 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no 
functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:44.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:44.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:44.396 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:44.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:44.396 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:44.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:44.396 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:44.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:44.396 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:44.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:44.396 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:44.654 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:44.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:44.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:44.655 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:44.655 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:44.655 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:44.655 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:44.655 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:44.655 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:44.655 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:44.655 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:44.655 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:56.858 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:56.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:56.858 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:56.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:56.858 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:56.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:06.834 23:02:11 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:06.834 23:02:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:06.834 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:03:06.834 23:02:11 -- spdk/autotest.sh@102 -- # rm -f 00:03:06.834 23:02:11 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.371 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:09.371 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:09.371 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:09.371 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:80:04.5 (8086 2021): Already using the ioatdma driver 
00:03:09.372 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:09.372 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:09.372 23:02:15 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:09.372 23:02:15 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:09.372 23:02:15 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:09.372 23:02:15 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:09.372 23:02:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:09.372 23:02:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:09.372 23:02:15 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:09.372 23:02:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.372 23:02:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:09.372 23:02:15 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:09.372 23:02:15 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:09.372 23:02:15 -- spdk/autotest.sh@121 -- # grep -v p 00:03:09.372 23:02:15 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:09.372 23:02:15 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:09.372 23:02:15 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:09.372 23:02:15 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:09.372 23:02:15 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:09.631 No valid GPT data, bailing 00:03:09.631 23:02:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:09.631 23:02:15 -- scripts/common.sh@393 -- # pt= 00:03:09.631 23:02:15 -- scripts/common.sh@394 -- # return 1 00:03:09.631 23:02:15 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:09.631 1+0 records in 00:03:09.631 1+0 records out 00:03:09.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00151734 s, 691 MB/s 00:03:09.631 23:02:15 -- spdk/autotest.sh@129 -- # sync 00:03:09.631 23:02:15 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:09.631 23:02:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:09.631 23:02:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.204 23:02:21 -- spdk/autotest.sh@135 -- # uname -s 00:03:16.204 23:02:21 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:16.204 23:02:21 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.204 23:02:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.204 23:02:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.204 23:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:16.204 ************************************ 00:03:16.204 START TEST setup.sh 00:03:16.204 ************************************ 00:03:16.204 23:02:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.204 * Looking for test storage... 
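The pre-cleanup trace above probes each NVMe namespace for an existing partition table (scripts/spdk-gpt.py, then the blkid -s PTTYPE fallback) and, finding none, zeroes the first MiB with dd before syncing. A minimal stand-alone sketch of that pattern, assuming root privileges and a helper name chosen here only for illustration (this is not the SPDK script itself):

    #!/usr/bin/env bash
    # Illustrative re-creation of the wipe step seen in the log above.
    wipe_unused_namespaces() {
        local dev pt
        for dev in $(ls /dev/nvme*n* | grep -v p || true); do   # whole namespaces only, skip partitions
            pt=$(blkid -s PTTYPE -o value "$dev" || true)        # same probe the trace falls back to
            if [[ -n "$pt" ]]; then
                echo "$dev carries a $pt partition table, leaving it untouched"
                continue
            fi
            dd if=/dev/zero of="$dev" bs=1M count=1              # zero the first MiB, as logged
        done
        sync
    }
    wipe_unused_namespaces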
00:03:16.204 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:16.204 23:02:21 -- setup/test-setup.sh@10 -- # uname -s 00:03:16.204 23:02:21 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:16.204 23:02:21 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:16.204 23:02:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.204 23:02:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.204 23:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:16.204 ************************************ 00:03:16.204 START TEST acl 00:03:16.204 ************************************ 00:03:16.204 23:02:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:16.463 * Looking for test storage... 00:03:16.463 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:16.463 23:02:22 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.463 23:02:22 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:16.463 23:02:22 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:16.463 23:02:22 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:16.463 23:02:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:16.463 23:02:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:16.463 23:02:22 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:16.463 23:02:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.463 23:02:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:16.463 23:02:22 -- setup/acl.sh@12 -- # devs=() 00:03:16.463 23:02:22 -- setup/acl.sh@12 -- # declare -a devs 00:03:16.463 23:02:22 -- setup/acl.sh@13 -- # drivers=() 00:03:16.463 23:02:22 -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.463 23:02:22 -- setup/acl.sh@51 -- # setup reset 00:03:16.463 23:02:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.463 23:02:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.661 23:02:25 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:20.661 23:02:25 -- setup/acl.sh@16 -- # local dev driver 00:03:20.661 23:02:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.662 23:02:25 -- setup/acl.sh@15 -- # setup output status 00:03:20.662 23:02:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.662 23:02:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:23.200 Hugepages 00:03:23.200 node hugesize free / total 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 00:03:23.200 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # continue 00:03:23.200 23:02:28 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 
23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.200 23:02:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.200 23:02:28 -- setup/acl.sh@20 -- # continue 00:03:23.200 23:02:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.459 23:02:29 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:23.459 23:02:29 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:23.459 23:02:29 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:23.459 23:02:29 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:23.459 23:02:29 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:23.459 23:02:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.459 23:02:29 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:23.459 23:02:29 -- setup/acl.sh@54 -- # run_test denied denied 00:03:23.459 23:02:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:23.459 23:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.459 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:03:23.459 ************************************ 00:03:23.459 START TEST denied 00:03:23.459 ************************************ 00:03:23.459 23:02:29 -- common/autotest_common.sh@1104 -- # denied 00:03:23.459 23:02:29 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:23.459 23:02:29 -- setup/acl.sh@38 -- # setup output config 00:03:23.459 23:02:29 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:23.459 23:02:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.459 23:02:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:27.654 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:27.654 23:02:32 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:27.654 23:02:32 -- setup/acl.sh@28 -- # local dev driver 00:03:27.654 23:02:32 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:27.654 23:02:32 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:27.654 23:02:32 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:27.654 23:02:32 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:27.654 23:02:32 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:27.654 23:02:32 -- setup/acl.sh@41 -- # setup reset 00:03:27.654 23:02:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.654 23:02:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.888 00:03:31.888 real 0m7.915s 00:03:31.888 user 0m2.355s 00:03:31.888 sys 0m4.762s 00:03:31.888 23:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.888 23:02:36 -- common/autotest_common.sh@10 -- # set +x 00:03:31.888 ************************************ 00:03:31.888 END TEST denied 00:03:31.888 ************************************ 00:03:31.888 23:02:37 -- setup/acl.sh@55 -- # 
run_test allowed allowed 00:03:31.888 23:02:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.888 23:02:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.888 23:02:37 -- common/autotest_common.sh@10 -- # set +x 00:03:31.888 ************************************ 00:03:31.888 START TEST allowed 00:03:31.888 ************************************ 00:03:31.888 23:02:37 -- common/autotest_common.sh@1104 -- # allowed 00:03:31.888 23:02:37 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:31.888 23:02:37 -- setup/acl.sh@45 -- # setup output config 00:03:31.888 23:02:37 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:31.888 23:02:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.888 23:02:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:37.172 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.172 23:02:42 -- setup/acl.sh@47 -- # verify 00:03:37.172 23:02:42 -- setup/acl.sh@28 -- # local dev driver 00:03:37.172 23:02:42 -- setup/acl.sh@48 -- # setup reset 00:03:37.172 23:02:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.172 23:02:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.467 00:03:40.467 real 0m9.183s 00:03:40.467 user 0m2.425s 00:03:40.467 sys 0m4.898s 00:03:40.467 23:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.467 23:02:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.467 ************************************ 00:03:40.467 END TEST allowed 00:03:40.467 ************************************ 00:03:40.727 00:03:40.727 real 0m24.304s 00:03:40.727 user 0m7.293s 00:03:40.727 sys 0m14.594s 00:03:40.727 23:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.727 23:02:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.727 ************************************ 00:03:40.727 END TEST acl 00:03:40.727 ************************************ 00:03:40.727 23:02:46 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.727 23:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.727 23:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.727 23:02:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.727 ************************************ 00:03:40.727 START TEST hugepages 00:03:40.727 ************************************ 00:03:40.727 23:02:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.727 * Looking for test storage... 
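Both ACL sub-tests above drive scripts/setup.sh purely through environment variables: the denied pass puts the controller on PCI_BLOCKED and expects the "Skipping denied controller" line, while the allowed pass puts the same address on PCI_ALLOWED and expects it to be rebound (nvme -> vfio-pci in this run). A condensed sketch of that pattern, reusing the workspace path and the 0000:d8:00.0 address from this log (run as root):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk

    # denied: a blocked controller must be skipped by setup.sh config
    PCI_BLOCKED=' 0000:d8:00.0' ./scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:d8:00.0'
    ./scripts/setup.sh reset

    # allowed: an allow-listed controller must be rebound to a userspace driver
    PCI_ALLOWED='0000:d8:00.0' ./scripts/setup.sh config \
        | grep -E '0000:d8:00.0 .*: nvme -> .*'
    ./scripts/setup.sh reset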
00:03:40.727 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:40.727 23:02:46 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:40.727 23:02:46 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:40.727 23:02:46 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:40.727 23:02:46 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:40.727 23:02:46 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:40.727 23:02:46 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:40.727 23:02:46 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:40.727 23:02:46 -- setup/common.sh@18 -- # local node= 00:03:40.727 23:02:46 -- setup/common.sh@19 -- # local var val 00:03:40.727 23:02:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.727 23:02:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.727 23:02:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.727 23:02:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.727 23:02:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.727 23:02:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 41135788 kB' 'MemAvailable: 44867700 kB' 'Buffers: 4100 kB' 'Cached: 10641452 kB' 'SwapCached: 0 kB' 'Active: 7421568 kB' 'Inactive: 3704024 kB' 'Active(anon): 7024396 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483612 kB' 'Mapped: 219656 kB' 'Shmem: 6544356 kB' 'KReclaimable: 255164 kB' 'Slab: 1185016 kB' 'SReclaimable: 255164 kB' 'SUnreclaim: 929852 kB' 'KernelStack: 22048 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433352 kB' 'Committed_AS: 8267564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 23:02:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # continue 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 23:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 23:02:46 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.728 23:02:46 -- setup/common.sh@33 -- # echo 2048 00:03:40.728 23:02:46 -- setup/common.sh@33 -- # return 0 00:03:40.729 23:02:46 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:40.729 23:02:46 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:40.729 23:02:46 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:40.729 23:02:46 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:40.729 23:02:46 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:40.729 23:02:46 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
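The get_meminfo call that just returned 2048 is, at heart, a keyed lookup in /proc/meminfo using the same IFS=': ' read loop shown in the trace. A self-contained sketch of that lookup follows; the function name is chosen here for illustration and is not one of the SPDK helpers:

    #!/usr/bin/env bash
    # Minimal /proc/meminfo lookup mirroring the traced pattern.
    meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then   # e.g. "Hugepagesize" -> value in kB
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(meminfo_value Hugepagesize)   # 2048 on this machine
    echo "default hugepage size: ${default_hugepages} kB"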
00:03:40.729 23:02:46 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:40.729 23:02:46 -- setup/hugepages.sh@207 -- # get_nodes 00:03:40.729 23:02:46 -- setup/hugepages.sh@27 -- # local node 00:03:40.729 23:02:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.729 23:02:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:40.729 23:02:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.729 23:02:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.729 23:02:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.729 23:02:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.729 23:02:46 -- setup/hugepages.sh@208 -- # clear_hp 00:03:40.729 23:02:46 -- setup/hugepages.sh@37 -- # local node hp 00:03:40.729 23:02:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.729 23:02:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.729 23:02:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.729 23:02:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.729 23:02:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.729 23:02:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.729 23:02:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.729 23:02:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.729 23:02:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.729 23:02:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.729 23:02:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.729 23:02:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.729 23:02:46 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:40.729 23:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.729 23:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.729 23:02:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.729 ************************************ 00:03:40.729 START TEST default_setup 00:03:40.729 ************************************ 00:03:40.729 23:02:46 -- common/autotest_common.sh@1104 -- # default_setup 00:03:40.729 23:02:46 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:40.729 23:02:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.729 23:02:46 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.729 23:02:46 -- setup/hugepages.sh@51 -- # shift 00:03:40.729 23:02:46 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.729 23:02:46 -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.729 23:02:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.729 23:02:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.729 23:02:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.729 23:02:46 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.729 23:02:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.729 23:02:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.729 23:02:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.729 23:02:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.729 23:02:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.729 23:02:46 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.729 23:02:46 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.729 23:02:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.729 23:02:46 -- setup/hugepages.sh@73 -- # return 0 00:03:40.729 23:02:46 -- setup/hugepages.sh@137 -- # setup output 00:03:40.729 23:02:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.729 23:02:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:44.024 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:44.024 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:44.024 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:44.024 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:44.024 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:44.024 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:44.283 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:46.193 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:46.193 23:02:51 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:46.193 23:02:51 -- setup/hugepages.sh@89 -- # local node 00:03:46.193 23:02:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.193 23:02:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.193 23:02:51 -- setup/hugepages.sh@92 -- # local surp 00:03:46.193 23:02:51 -- setup/hugepages.sh@93 -- # local resv 00:03:46.193 23:02:51 -- setup/hugepages.sh@94 -- # local anon 00:03:46.193 23:02:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.193 23:02:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.193 23:02:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.193 23:02:51 -- setup/common.sh@18 -- # local node= 00:03:46.193 23:02:51 -- setup/common.sh@19 -- # local var val 00:03:46.193 23:02:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.193 23:02:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.193 23:02:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.193 23:02:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.193 23:02:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.193 23:02:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.193 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 23:02:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43283272 kB' 'MemAvailable: 47015056 kB' 'Buffers: 4100 kB' 'Cached: 10641592 kB' 'SwapCached: 0 kB' 'Active: 7435752 kB' 'Inactive: 3704024 kB' 'Active(anon): 7038580 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497344 kB' 'Mapped: 219844 kB' 'Shmem: 6544496 kB' 'KReclaimable: 254908 kB' 'Slab: 1183272 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928364 kB' 'KernelStack: 22368 kB' 'PageTables: 9276 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8282724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218092 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:46.193 23:02:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.193 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.193 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.193 23:02:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.193 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.193 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.193 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 
-- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.194 23:02:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.194 23:02:51 -- setup/common.sh@33 -- # echo 0 00:03:46.194 23:02:51 -- setup/common.sh@33 -- # return 0 00:03:46.194 23:02:51 -- setup/hugepages.sh@97 -- # anon=0 00:03:46.194 23:02:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.194 23:02:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.194 23:02:51 -- setup/common.sh@18 -- # local node= 00:03:46.194 23:02:51 -- setup/common.sh@19 -- # local var val 00:03:46.194 23:02:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.194 23:02:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.194 23:02:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.194 23:02:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.194 23:02:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.194 23:02:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.194 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43288620 kB' 'MemAvailable: 47020404 kB' 'Buffers: 4100 kB' 'Cached: 10641596 kB' 'SwapCached: 0 kB' 'Active: 7435308 kB' 'Inactive: 3704024 kB' 'Active(anon): 7038136 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496800 kB' 'Mapped: 219804 kB' 'Shmem: 6544500 kB' 'KReclaimable: 254908 kB' 'Slab: 1183440 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928532 kB' 'KernelStack: 22272 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8282736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 
'DirectMap1G: 38797312 kB' 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 
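The repeated field checks traced above come from the get_meminfo helper in setup/common.sh (the name appears in the trace), which walks every /proc/meminfo field until it reaches the one requested and echoes its value. A minimal sketch of that scan pattern, reconstructed from the trace rather than copied from the script, so details of the real implementation may differ:

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node queries read the sysfs copy, e.g. /sys/devices/system/node/node0/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one is found
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo HugePages_Surp       # system-wide query, prints 0 in this run
get_meminfo HugePages_Surp 0     # per-node query against node0/meminfo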
00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.195 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.195 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 
-- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.196 23:02:51 -- setup/common.sh@33 -- # echo 0 00:03:46.196 23:02:51 -- setup/common.sh@33 -- # return 0 00:03:46.196 23:02:51 -- setup/hugepages.sh@99 -- # surp=0 00:03:46.196 23:02:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.196 23:02:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.196 23:02:51 -- setup/common.sh@18 -- # local node= 00:03:46.196 23:02:51 -- setup/common.sh@19 -- # local var val 00:03:46.196 23:02:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.196 23:02:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.196 23:02:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.196 23:02:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.196 23:02:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.196 23:02:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43288976 kB' 'MemAvailable: 47020760 kB' 'Buffers: 4100 kB' 'Cached: 10641608 kB' 'SwapCached: 0 kB' 'Active: 7434916 kB' 'Inactive: 3704024 kB' 'Active(anon): 7037744 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496756 kB' 'Mapped: 219724 kB' 'Shmem: 6544512 kB' 'KReclaimable: 254908 kB' 'Slab: 1183224 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928316 kB' 'KernelStack: 22208 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8282752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.196 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.196 23:02:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.196 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 
00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.197 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.197 23:02:51 -- setup/common.sh@33 -- # echo 0 00:03:46.197 23:02:51 -- setup/common.sh@33 -- # return 0 00:03:46.197 23:02:51 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.197 23:02:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.197 nr_hugepages=1024 00:03:46.197 23:02:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.197 resv_hugepages=0 00:03:46.197 23:02:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.197 surplus_hugepages=0 00:03:46.197 23:02:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.197 anon_hugepages=0 00:03:46.197 23:02:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.197 23:02:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.197 23:02:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.197 23:02:51 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:46.197 23:02:51 -- setup/common.sh@18 -- # local node= 00:03:46.197 23:02:51 -- setup/common.sh@19 -- # local var val 00:03:46.197 23:02:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.197 23:02:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.197 23:02:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.197 23:02:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.197 23:02:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.197 23:02:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.197 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43289868 kB' 'MemAvailable: 47021652 kB' 'Buffers: 4100 kB' 'Cached: 10641612 kB' 'SwapCached: 0 kB' 'Active: 7435024 kB' 'Inactive: 3704024 kB' 'Active(anon): 7037852 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496864 kB' 'Mapped: 219724 kB' 'Shmem: 6544516 kB' 'KReclaimable: 254908 kB' 'Slab: 1183200 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928292 kB' 'KernelStack: 22176 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8282768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218076 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
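The nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 values echoed above feed the consistency check traced as (( 1024 == nr_hugepages + surp + resv )). A rough standalone sketch of that arithmetic, using the values from this run; the surrounding variable handling is an assumption, not the script's actual code:

nr_hugepages=1024
resv=0     # reserved pages reported by HugePages_Rsvd
surp=0     # surplus pages reported by HugePages_Surp
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

# The allocated pool must equal the requested pages plus surplus and reserved pages
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool matches the requested size"
else
    echo "hugepage pool mismatch: total=$total expected=$((nr_hugepages + surp + resv))"
fi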
00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.198 23:02:51 -- setup/common.sh@32 -- # 
continue 00:03:46.198 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.459 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.459 23:02:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.459 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.459 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.459 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.459 23:02:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.459 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.459 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.460 23:02:51 -- setup/common.sh@33 -- # echo 1024 00:03:46.460 23:02:51 -- setup/common.sh@33 -- # return 0 00:03:46.460 23:02:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.460 23:02:51 -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.460 23:02:51 -- setup/hugepages.sh@27 -- # local node 00:03:46.460 23:02:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.460 23:02:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.460 23:02:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.460 23:02:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:46.460 23:02:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.460 23:02:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.460 23:02:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.460 23:02:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.460 23:02:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.460 23:02:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.460 23:02:51 -- setup/common.sh@18 -- # local node=0 00:03:46.460 23:02:51 -- setup/common.sh@19 -- # local var val 00:03:46.460 23:02:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.460 23:02:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.460 23:02:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.460 23:02:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.460 23:02:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.460 23:02:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26465396 kB' 'MemUsed: 6119972 kB' 'SwapCached: 0 
kB' 'Active: 2183808 kB' 'Inactive: 166356 kB' 'Active(anon): 2057924 kB' 'Inactive(anon): 0 kB' 'Active(file): 125884 kB' 'Inactive(file): 166356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2109932 kB' 'Mapped: 112108 kB' 'AnonPages: 243540 kB' 'Shmem: 1817692 kB' 'KernelStack: 11448 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138324 kB' 'Slab: 574988 kB' 'SReclaimable: 138324 kB' 'SUnreclaim: 436664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.460 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 
23:02:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': 
' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 23:02:51 -- setup/common.sh@32 -- # continue 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 23:02:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 23:02:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 23:02:51 -- setup/common.sh@33 -- # echo 0 00:03:46.462 23:02:51 -- setup/common.sh@33 -- # return 0 00:03:46.462 23:02:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.462 23:02:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.462 23:02:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.462 23:02:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.462 23:02:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.462 node0=1024 expecting 1024 00:03:46.462 23:02:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.462 00:03:46.462 real 0m5.527s 00:03:46.462 user 0m1.316s 00:03:46.462 sys 0m2.351s 00:03:46.462 23:02:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.462 23:02:51 -- common/autotest_common.sh@10 -- # set +x 00:03:46.462 ************************************ 00:03:46.462 END TEST default_setup 00:03:46.462 ************************************ 00:03:46.462 23:02:52 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.462 23:02:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:46.462 23:02:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:46.462 23:02:52 -- common/autotest_common.sh@10 -- # set +x 00:03:46.462 ************************************ 00:03:46.462 START TEST per_node_1G_alloc 00:03:46.462 ************************************ 00:03:46.462 23:02:52 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:46.462 23:02:52 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.462 23:02:52 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:46.462 23:02:52 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.462 23:02:52 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:46.462 23:02:52 -- setup/hugepages.sh@51 -- # shift 00:03:46.462 23:02:52 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:46.462 23:02:52 -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.462 23:02:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.462 23:02:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.462 23:02:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:46.462 23:02:52 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:46.462 23:02:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.462 23:02:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.462 23:02:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.462 23:02:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.462 23:02:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.462 23:02:52 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:46.462 23:02:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.462 23:02:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.462 23:02:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.462 23:02:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.462 23:02:52 -- setup/hugepages.sh@73 -- # return 0 00:03:46.462 23:02:52 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.462 
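Note: the trace just above closes the default_setup test (node0=1024 expecting 1024) and opens per_node_1G_alloc, where get_test_nr_hugepages is asked for 1048576 kB on each of NUMA nodes 0 and 1 and settles on 512 pages per node (NRHUGE=512 here, HUGENODE=0,1 on the next trace line), i.e. 1024 pages system-wide, which is the total the later checks expect. A minimal sketch of that per-node split, assuming the 2048 kB default hugepage size implied by 1048576/512; variable names are illustrative, not the script's own:

  #!/usr/bin/env bash
  # Split a per-node size request into hugepage counts, per_node_1G_alloc style.
  size_kb=1048576                 # 1 GiB requested per node (from the trace)
  default_hugepage_kb=2048        # assumed default hugepage size on this machine
  nodes=(0 1)                     # NUMA nodes named on the command line

  nr_hugepages=$(( size_kb / default_hugepage_kb ))    # 512 pages per node
  declare -A nodes_test
  for node in "${nodes[@]}"; do
      nodes_test[$node]=$nr_hugepages                  # 512 on node0 and node1
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${nodes[*]}")"
  # -> NRHUGE=512 HUGENODE=0,1, matching the hugepages.sh@146 lines around this point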
23:02:52 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:46.462 23:02:52 -- setup/hugepages.sh@146 -- # setup output 00:03:46.462 23:02:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.462 23:02:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:49.753 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.753 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.016 23:02:55 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:50.016 23:02:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:50.016 23:02:55 -- setup/hugepages.sh@89 -- # local node 00:03:50.016 23:02:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.016 23:02:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.016 23:02:55 -- setup/hugepages.sh@92 -- # local surp 00:03:50.016 23:02:55 -- setup/hugepages.sh@93 -- # local resv 00:03:50.016 23:02:55 -- setup/hugepages.sh@94 -- # local anon 00:03:50.016 23:02:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.016 23:02:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.016 23:02:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.016 23:02:55 -- setup/common.sh@18 -- # local node= 00:03:50.016 23:02:55 -- setup/common.sh@19 -- # local var val 00:03:50.016 23:02:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.016 23:02:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.016 23:02:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.016 23:02:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.016 23:02:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.016 23:02:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.016 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.016 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43275192 kB' 'MemAvailable: 47006976 kB' 'Buffers: 4100 kB' 'Cached: 10641708 kB' 'SwapCached: 0 kB' 'Active: 7434248 kB' 'Inactive: 3704024 kB' 'Active(anon): 7037076 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495872 kB' 'Mapped: 218780 kB' 
'Shmem: 6544612 kB' 'KReclaimable: 254908 kB' 'Slab: 1183528 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928620 kB' 'KernelStack: 22048 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8271048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': 
' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- 
setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.017 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.017 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.018 23:02:55 -- setup/common.sh@33 -- # echo 0 00:03:50.018 23:02:55 -- setup/common.sh@33 -- # return 0 00:03:50.018 23:02:55 -- setup/hugepages.sh@97 -- # anon=0 00:03:50.018 23:02:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.018 23:02:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.018 23:02:55 -- setup/common.sh@18 -- # local node= 00:03:50.018 23:02:55 -- setup/common.sh@19 -- # local var val 00:03:50.018 23:02:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.018 23:02:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.018 23:02:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.018 23:02:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.018 23:02:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.018 23:02:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43278644 kB' 'MemAvailable: 47010428 kB' 'Buffers: 4100 kB' 'Cached: 10641712 kB' 'SwapCached: 0 kB' 'Active: 7434496 kB' 'Inactive: 3704024 kB' 'Active(anon): 7037324 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496164 kB' 'Mapped: 218656 kB' 'Shmem: 6544616 kB' 'KReclaimable: 254908 kB' 'Slab: 1183448 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928540 kB' 'KernelStack: 22032 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8271060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 
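Note: the long scans on either side of this point are verify_nr_hugepages walking every meminfo key until it finds the one it asked for. get_meminfo dumps /proc/meminfo (or the per-node file when a node is given) with mapfile, strips any "Node N " prefix, reads each line as "key: value", and echoes the value on a match; AnonHugePages and HugePages_Surp both come back 0 in this run. A condensed, self-contained sketch of that scan, reconstructed from the xtrace rather than copied from setup/common.sh:

  #!/usr/bin/env bash
  shopt -s extglob
  # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
  # /sys/devices/system/node/nodeNODE/meminfo when NODE is given and present.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each line with "Node N "
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo HugePages_Total          # 1024 on this machine, per the dump above
  get_meminfo HugePages_Surp 0         # 0 for node0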
23:02:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 
23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.018 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 
23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.019 23:02:55 -- setup/common.sh@33 -- # echo 0 00:03:50.019 23:02:55 -- setup/common.sh@33 -- # return 0 00:03:50.019 23:02:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:50.019 23:02:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.019 23:02:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.019 23:02:55 -- setup/common.sh@18 -- # local node= 00:03:50.019 23:02:55 -- setup/common.sh@19 -- # local var val 00:03:50.019 23:02:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.019 23:02:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.019 23:02:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.019 23:02:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.019 23:02:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.019 23:02:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43277888 kB' 'MemAvailable: 47009672 kB' 'Buffers: 4100 kB' 'Cached: 10641728 kB' 'SwapCached: 0 kB' 'Active: 7434216 kB' 'Inactive: 3704024 kB' 'Active(anon): 7037044 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495796 kB' 'Mapped: 218656 kB' 'Shmem: 6544632 kB' 'KReclaimable: 254908 kB' 'Slab: 1183448 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928540 kB' 'KernelStack: 22000 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8271076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.019 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.019 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # 
continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 
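Note: the scan in progress here is the HugePages_Rsvd lookup; together with the AnonHugePages and HugePages_Surp passes above, it feeds the accounting echoed just below (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and the check that 1024 == nr_hugepages + surp + resv. The same accounting can be reproduced outside the harness with a single awk pass; the 1024 target is this run's expected count, not a constant of the script:

  #!/usr/bin/env bash
  # Reproduce verify_nr_hugepages' consistency check with one pass over /proc/meminfo.
  read -r total surp resv < <(awk '
      /^HugePages_Total:/ {t=$2}
      /^HugePages_Surp:/  {s=$2}
      /^HugePages_Rsvd:/  {r=$2}
      END {print t, s, r}' /proc/meminfo)
  nr_hugepages=1024                     # expected count for this run (from the trace)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
  fi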
23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 23:02:55 -- setup/common.sh@33 -- # echo 0 00:03:50.021 23:02:55 -- setup/common.sh@33 -- # return 0 00:03:50.021 23:02:55 -- setup/hugepages.sh@100 -- # resv=0 00:03:50.021 23:02:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.021 nr_hugepages=1024 00:03:50.021 23:02:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.021 resv_hugepages=0 00:03:50.021 23:02:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.021 surplus_hugepages=0 00:03:50.021 23:02:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.021 anon_hugepages=0 00:03:50.021 23:02:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.021 23:02:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.021 23:02:55 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:03:50.021 23:02:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.021 23:02:55 -- setup/common.sh@18 -- # local node= 00:03:50.021 23:02:55 -- setup/common.sh@19 -- # local var val 00:03:50.021 23:02:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.021 23:02:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.021 23:02:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.021 23:02:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.021 23:02:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.021 23:02:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43277920 kB' 'MemAvailable: 47009704 kB' 'Buffers: 4100 kB' 'Cached: 10641748 kB' 'SwapCached: 0 kB' 'Active: 7434224 kB' 'Inactive: 3704024 kB' 'Active(anon): 7037052 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495792 kB' 'Mapped: 218656 kB' 'Shmem: 6544652 kB' 'KReclaimable: 254908 kB' 'Slab: 1183448 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928540 kB' 'KernelStack: 22000 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8271096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 23:02:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 23:02:55 -- setup/common.sh@33 -- # echo 1024 00:03:50.022 23:02:55 -- setup/common.sh@33 -- # return 0 00:03:50.022 23:02:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.022 23:02:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.022 23:02:55 -- setup/hugepages.sh@27 -- # local node 00:03:50.022 23:02:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.022 23:02:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.022 23:02:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.022 23:02:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.022 23:02:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.022 23:02:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.022 23:02:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.022 23:02:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.022 23:02:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.022 23:02:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.022 23:02:55 -- setup/common.sh@18 -- # local node=0 00:03:50.022 23:02:55 -- setup/common.sh@19 -- # local var val 00:03:50.022 23:02:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.022 23:02:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.022 23:02:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.022 23:02:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.022 23:02:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.022 23:02:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32585368 kB' 'MemFree: 27498908 kB' 'MemUsed: 5086460 kB' 'SwapCached: 0 kB' 'Active: 2183940 kB' 'Inactive: 166356 kB' 'Active(anon): 2058056 kB' 'Inactive(anon): 0 kB' 'Active(file): 125884 kB' 'Inactive(file): 166356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2109956 kB' 'Mapped: 111456 kB' 'AnonPages: 243656 kB' 'Shmem: 1817716 kB' 'KernelStack: 11352 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138324 kB' 'Slab: 575356 kB' 'SReclaimable: 138324 kB' 'SUnreclaim: 437032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.022 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.022 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 
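The loop traced here is the setup/common.sh get_meminfo walk: it reads /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix from each row, and keeps skipping (continue) until the field name matches the requested key, HugePages_Surp, then echoes its value. A minimal standalone sketch of that lookup, assuming the same meminfo layout; the function name get_meminfo_field is illustrative and is not the repo's actual helper:

#!/usr/bin/env bash
# Illustrative only: mimics the traced setup/common.sh behaviour of reading a
# (per-node) meminfo file and echoing the value of a single requested field.
get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}        # per-node rows are prefixed with "Node <N> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then     # skip every non-matching row, as in the trace
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    echo 0
}

# Example: surplus 2 MiB hugepages on NUMA node 0 (prints 0 in the run above).
get_meminfo_field HugePages_Surp 0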
00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- 
setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@33 -- # echo 0 00:03:50.023 23:02:55 -- setup/common.sh@33 -- # return 0 00:03:50.023 23:02:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.023 23:02:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.023 23:02:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.023 23:02:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.023 23:02:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.023 23:02:55 -- setup/common.sh@18 -- # local node=1 00:03:50.023 23:02:55 -- setup/common.sh@19 -- # local var val 00:03:50.023 23:02:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.023 23:02:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.023 23:02:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.023 23:02:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.023 23:02:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.023 23:02:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698436 kB' 'MemFree: 15780348 kB' 'MemUsed: 11918088 kB' 'SwapCached: 0 kB' 'Active: 5250728 kB' 'Inactive: 3537668 kB' 'Active(anon): 4979440 kB' 'Inactive(anon): 0 kB' 'Active(file): 271288 kB' 'Inactive(file): 3537668 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8535928 kB' 'Mapped: 107200 kB' 'AnonPages: 252560 kB' 'Shmem: 4726972 kB' 'KernelStack: 10680 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116584 kB' 'Slab: 608092 kB' 'SReclaimable: 116584 kB' 'SUnreclaim: 491508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 
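At this point the script repeats the same HugePages_Surp lookup for node 1, and hugepages.sh folds the reserved and surplus counts into a per-node expectation that it later echoes as "nodeN=512 expecting 512". A hedged sketch of just that bookkeeping, hard-coding the two-node, 512-pages-per-node values visible in this run (array names follow the trace; the script itself is illustrative):

#!/usr/bin/env bash
# Illustrative accounting only: 2 NUMA nodes, 512 hugepages each, resv=0 and
# HugePages_Surp=0 per node, as reported in the trace above.
nodes_sys=(512 512)    # pages the kernel reports per node (from get_nodes)
nodes_test=(512 512)   # pages the test expects per node
resv=0                 # system-wide HugePages_Rsvd

for node in "${!nodes_test[@]}"; do
    surp=0                                   # per-node HugePages_Surp (0 here)
    (( nodes_test[node] += resv + surp ))
done

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# Prints "node0=512 expecting 512" and "node1=512 expecting 512", matching the log.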
00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.023 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.024 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.024 23:02:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- 
setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # continue 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.284 23:02:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.284 23:02:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.284 23:02:55 -- setup/common.sh@33 -- # echo 0 00:03:50.284 23:02:55 -- setup/common.sh@33 -- # return 0 00:03:50.284 23:02:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.284 23:02:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.284 23:02:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.284 23:02:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.284 23:02:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:50.284 node0=512 expecting 512 00:03:50.284 23:02:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.284 23:02:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.284 23:02:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.284 23:02:55 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:50.284 node1=512 expecting 512 00:03:50.284 23:02:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:50.284 00:03:50.284 real 0m3.739s 00:03:50.284 user 0m1.410s 00:03:50.284 sys 0m2.391s 00:03:50.284 23:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.284 23:02:55 -- common/autotest_common.sh@10 -- # set +x 00:03:50.284 ************************************ 00:03:50.284 END TEST per_node_1G_alloc 00:03:50.284 ************************************ 00:03:50.284 23:02:55 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:50.284 
23:02:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:50.284 23:02:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:50.284 23:02:55 -- common/autotest_common.sh@10 -- # set +x 00:03:50.284 ************************************ 00:03:50.284 START TEST even_2G_alloc 00:03:50.284 ************************************ 00:03:50.284 23:02:55 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:50.284 23:02:55 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:50.284 23:02:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:50.284 23:02:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.284 23:02:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.284 23:02:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:50.284 23:02:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.284 23:02:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.284 23:02:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.284 23:02:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.284 23:02:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.284 23:02:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.284 23:02:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.284 23:02:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.284 23:02:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.284 23:02:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.284 23:02:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.284 23:02:55 -- setup/hugepages.sh@83 -- # : 512 00:03:50.284 23:02:55 -- setup/hugepages.sh@84 -- # : 1 00:03:50.284 23:02:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.284 23:02:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.284 23:02:55 -- setup/hugepages.sh@83 -- # : 0 00:03:50.284 23:02:55 -- setup/hugepages.sh@84 -- # : 0 00:03:50.284 23:02:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.284 23:02:55 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:50.284 23:02:55 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:50.284 23:02:55 -- setup/hugepages.sh@153 -- # setup output 00:03:50.284 23:02:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.285 23:02:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:53.578 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.578 0000:80:04.0 (8086 2021): 
Already using the vfio-pci driver 00:03:53.578 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:53.842 23:02:59 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:53.842 23:02:59 -- setup/hugepages.sh@89 -- # local node 00:03:53.842 23:02:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.842 23:02:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.842 23:02:59 -- setup/hugepages.sh@92 -- # local surp 00:03:53.842 23:02:59 -- setup/hugepages.sh@93 -- # local resv 00:03:53.842 23:02:59 -- setup/hugepages.sh@94 -- # local anon 00:03:53.842 23:02:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.842 23:02:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.842 23:02:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.842 23:02:59 -- setup/common.sh@18 -- # local node= 00:03:53.842 23:02:59 -- setup/common.sh@19 -- # local var val 00:03:53.842 23:02:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.842 23:02:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.842 23:02:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.842 23:02:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.842 23:02:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.842 23:02:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.842 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.842 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.842 23:02:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43305324 kB' 'MemAvailable: 47037108 kB' 'Buffers: 4100 kB' 'Cached: 10641852 kB' 'SwapCached: 0 kB' 'Active: 7435576 kB' 'Inactive: 3704024 kB' 'Active(anon): 7038404 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497176 kB' 'Mapped: 218848 kB' 'Shmem: 6544756 kB' 'KReclaimable: 254908 kB' 'Slab: 1183332 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928424 kB' 'KernelStack: 22064 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8275696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218012 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:53.842 23:02:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.842 23:02:59 -- setup/common.sh@32 -- # continue 00:03:53.842 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.842 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.842 23:02:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.842 23:02:59 -- setup/common.sh@32 -- # continue 00:03:53.842 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.842 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.842 23:02:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.842 23:02:59 -- setup/common.sh@32 -- # continue 
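The even_2G_alloc test started above asks get_test_nr_hugepages for 2097152 kB; with the 2048 kB Hugepagesize reported in the meminfo dumps that is 1024 pages (NRHUGE=1024), and HUGE_EVEN_ALLOC=yes spreads them evenly, 512 per node, across the two NUMA nodes. A small sketch of that arithmetic only; the variable names are illustrative:

#!/usr/bin/env bash
# Illustrative arithmetic for the even_2G_alloc run traced above.
size_kb=2097152          # requested hugepage memory (2 GiB)
hugepagesize_kb=2048     # Hugepagesize from the meminfo dumps (2 MiB pages)
no_nodes=2               # NUMA nodes present under /sys/devices/system/node

nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024, matches NRHUGE=1024
per_node=$(( nr_hugepages / no_nodes ))         # 512 per node with HUGE_EVEN_ALLOC=yes

echo "nr_hugepages=${nr_hugepages}"             # -> 1024
echo "per node: ${per_node}"                    # -> 512 (node0=512, node1=512 expected)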
00:03:53.842 23:02:59 -- setup/common.sh@31 -- # IFS=': '
00:03:53.842 23:02:59 -- setup/common.sh@31 -- # read -r var val _
00:03:53.842 23:02:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.842 23:02:59 -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read / compare / continue sequence repeats for every remaining /proc/meminfo key until the requested one is reached ...]
00:03:53.843 23:02:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.843 23:02:59 -- setup/common.sh@33 -- # echo 0
00:03:53.843 23:02:59 -- setup/common.sh@33 -- # return 0
00:03:53.843 23:02:59 -- setup/hugepages.sh@97 -- # anon=0
00:03:53.843 23:02:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:53.843 23:02:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.843 23:02:59 -- setup/common.sh@18 -- # local node=
00:03:53.843 23:02:59 -- setup/common.sh@19 -- # local var val
00:03:53.843 23:02:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:53.843 23:02:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.843 23:02:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.843 23:02:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.843 23:02:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.843 23:02:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.843 23:02:59 -- setup/common.sh@31 -- # IFS=': '
00:03:53.843 23:02:59 -- setup/common.sh@31 -- # read -r var val _
00:03:53.844 23:02:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43309192 kB' 'MemAvailable: 47040976 kB' 'Buffers: 4100 kB' 'Cached: 10641852 kB' 'SwapCached: 0 kB' 'Active: 7436140 kB' 'Inactive: 3704024 kB' 'Active(anon): 7038968 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497808 kB' 'Mapped: 218792 kB' 'Shmem: 6544756 kB' 'KReclaimable: 254908 kB' 'Slab: 1183324 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928416 kB' 'KernelStack: 22128 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8277076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB'
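The trace above is setup/common.sh walking /proc/meminfo one key at a time: it snapshots the file with mapfile, strips any "Node N " prefix so per-node files parse the same way, and compares each key against the requested one until it can echo the value back to the caller in setup/hugepages.sh. A minimal stand-alone sketch of that flow follows; the function name get_meminfo_sketch and the zero fallback are mine, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch only: mirrors the mapfile/strip/scan pattern visible in the trace above.
    shopt -s extglob                               # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _ mem_f=/proc/meminfo
        local -a mem
        # Per-node statistics live under sysfs when a node id is supplied.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node N " prefix, if any
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0                                     # key not present; assume 0
    }
    get_meminfo_sketch HugePages_Surp              # system-wide surplus pages
    get_meminfo_sketch HugePages_Surp 0            # same counter for NUMA node 0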
00:03:53.844 23:02:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.844 23:02:59 -- setup/common.sh@32 -- # continue
[... identical compare/continue steps for each remaining /proc/meminfo key elided ...]
00:03:53.846 23:02:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.846 23:02:59 -- setup/common.sh@33 -- # echo 0
00:03:53.846 23:02:59 -- setup/common.sh@33 -- # return 0
00:03:53.846 23:02:59 -- setup/hugepages.sh@99 -- # surp=0
00:03:53.846 23:02:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:53.846 23:02:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:53.846 23:02:59 -- setup/common.sh@18 -- # local node=
00:03:53.846 23:02:59 -- setup/common.sh@19 -- # local var val
00:03:53.846 23:02:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:53.846 23:02:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.846 23:02:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.846 23:02:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.846 23:02:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.846 23:02:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.846 23:02:59 -- setup/common.sh@31 -- # IFS=': '
00:03:53.846 23:02:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43308968 kB' 'MemAvailable: 47040752 kB' 'Buffers: 4100 kB' 'Cached: 10641864 kB' 'SwapCached: 0 kB' 'Active: 7436104 kB' 'Inactive: 3704024 kB' 'Active(anon): 7038932 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497620 kB' 'Mapped: 218664 kB' 'Shmem: 6544768 kB' 'KReclaimable: 254908 kB' 'Slab: 1183332 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928424 kB' 'KernelStack: 22208 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8276840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218172 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB'
00:03:53.846 23:02:59 -- setup/common.sh@31 -- # read -r var val _
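For a quick manual cross-check of the counters this scan is after, the same fields can be pulled straight out of /proc/meminfo; this one-liner is only a convenience for reading the log, not something the SPDK scripts run:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo

On this run the snapshot above already shows the expected picture: 1024 pages of 2048 kB each, all free, none reserved or surplus.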
00:03:53.846 23:02:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.846 23:02:59 -- setup/common.sh@32 -- # continue
[... identical compare/continue steps for each remaining /proc/meminfo key elided ...]
00:03:53.848 23:02:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.848 23:02:59 -- setup/common.sh@33 -- # echo 0
00:03:53.848 23:02:59 -- setup/common.sh@33 -- # return 0
00:03:53.848 23:02:59 -- setup/hugepages.sh@100 -- # resv=0
00:03:53.848 23:02:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:53.848 nr_hugepages=1024
00:03:53.848 23:02:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:53.848 resv_hugepages=0
00:03:53.848 23:02:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:53.848 surplus_hugepages=0
00:03:53.848 23:02:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:53.848 anon_hugepages=0
00:03:53.848 23:02:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.848 23:02:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:53.848 23:02:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:53.848 23:02:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:53.848 23:02:59 -- setup/common.sh@18 -- # local node=
00:03:53.848 23:02:59 -- setup/common.sh@19 -- # local var val
00:03:53.848 23:02:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:53.848 23:02:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.848 23:02:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.848 23:02:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.848 23:02:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.848 23:02:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.848 23:02:59 -- setup/common.sh@31 -- # IFS=': '
00:03:53.848 23:02:59 -- setup/common.sh@31 -- # read -r var val _
00:03:53.848 23:02:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43308800 kB' 'MemAvailable: 47040584 kB' 'Buffers: 4100 kB' 'Cached: 10641880 kB' 'SwapCached: 0 kB' 'Active: 7436216 kB' 'Inactive: 3704024 kB' 'Active(anon): 7039044 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497660 kB' 'Mapped: 218664 kB' 'Shmem: 6544784 kB' 'KReclaimable: 254908 kB' 'Slab: 1183332 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928424 kB' 'KernelStack: 22240 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8277104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218204 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB'
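At this point setup/hugepages.sh has echoed nr_hugepages=1024 with no reserved, surplus, or anonymous huge pages, and the checks at hugepages.sh@107-110 verify that HugePages_Total read back from /proc/meminfo is consistent with that. A compressed, stand-alone sketch of that bookkeeping (the literal 1024 is the value this run expects; the mismatch message is mine, not the script's wording):

    nr_hugepages=1024                                     # value echoed by the script above
    read -r _ surp  < <(grep '^HugePages_Surp:'  /proc/meminfo)
    read -r _ resv  < <(grep '^HugePages_Rsvd:'  /proc/meminfo)
    read -r _ total < <(grep '^HugePages_Total:' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

In this log the arithmetic comes out as 1024 == 1024 + 0 + 0, so the check passes silently and the run continues.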
00:03:53.848 23:02:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.848 23:02:59 -- setup/common.sh@32 -- # continue
[... identical compare/continue steps for each remaining /proc/meminfo key elided ...]
00:03:53.850 23:02:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.850 23:02:59 -- setup/common.sh@33 -- # echo 1024
00:03:53.850 23:02:59 -- setup/common.sh@33 -- # return 0
00:03:53.850 23:02:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.850 23:02:59 -- setup/hugepages.sh@112 -- # get_nodes
00:03:53.850 23:02:59 -- setup/hugepages.sh@27 -- # local node
00:03:53.850 23:02:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.850 23:02:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:53.850 23:02:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.850 23:02:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:53.850 23:02:59 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:53.850 23:02:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:53.850 23:02:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:53.850 23:02:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:53.850 23:02:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:53.850 23:02:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.850 23:02:59 -- setup/common.sh@18 -- # local node=0
00:03:53.850 23:02:59 -- setup/common.sh@19 -- # local var val
00:03:53.850 23:02:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:53.850 23:02:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.850 23:02:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:53.850 23:02:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:53.850 23:02:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.850 23:02:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.850 23:02:59 -- setup/common.sh@31 -- # IFS=': '
00:03:53.850 23:02:59 -- setup/common.sh@31 -- # read -r var val _
00:03:53.850 23:02:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27525660 kB' 'MemUsed: 5059708 kB' 'SwapCached: 0 kB' 'Active: 2185676 kB' 'Inactive: 166356 kB' 'Active(anon): 2059792 kB' 'Inactive(anon): 0 kB' 'Active(file): 125884 kB' 'Inactive(file): 166356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110000 kB' 'Mapped: 111460 kB' 'AnonPages: 245284 kB' 'Shmem: 1817760 kB' 'KernelStack: 11528 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138324 kB' 'Slab: 575260 kB' 'SReclaimable: 138324 kB' 'SUnreclaim: 436936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:53.850 23:02:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.850 23:02:59 -- setup/common.sh@32 -- # continue
[... identical compare/continue steps for each remaining node0 meminfo key elided ...]
00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.119 23:02:59 -- setup/common.sh@33 -- # echo 0
00:03:54.119 23:02:59 -- setup/common.sh@33 -- # return 0
00:03:54.119 23:02:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.119 23:02:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.119 23:02:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.119 23:02:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:54.119 23:02:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.119 23:02:59 -- setup/common.sh@18 -- # local node=1
00:03:54.119 23:02:59 -- setup/common.sh@19 -- # local var val
00:03:54.119 23:02:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.119 23:02:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.119 23:02:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:54.119 23:02:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:54.119 23:02:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.119 23:02:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698436 kB' 'MemFree: 15788924 kB' 'MemUsed: 11909512 kB' 'SwapCached: 0 kB' 'Active: 5251176 kB' 'Inactive: 3537668 kB' 'Active(anon): 4979888 kB' 'Inactive(anon): 0 kB' 'Active(file): 271288 kB' 'Inactive(file): 3537668 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8536004 kB' 'Mapped: 107204 kB' 'AnonPages: 253044 kB' 'Shmem: 4727048 kB' 'KernelStack: 10872 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116584 kB' 'Slab: 608040 kB' 'SReclaimable: 116584 kB' 'SUnreclaim: 491456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- 
setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # continue 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 23:02:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 23:02:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 23:02:59 -- setup/common.sh@33 -- # echo 0 00:03:54.120 23:02:59 -- setup/common.sh@33 -- # return 0 00:03:54.120 23:02:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.120 23:02:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.120 23:02:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.120 23:02:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.120 23:02:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.120 node0=512 expecting 512 00:03:54.120 23:02:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.120 23:02:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.120 23:02:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.120 23:02:59 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:54.120 node1=512 expecting 512 00:03:54.120 23:02:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:54.120 00:03:54.120 real 0m3.810s 00:03:54.120 user 0m1.432s 00:03:54.120 sys 0m2.448s 00:03:54.120 23:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.120 23:02:59 -- common/autotest_common.sh@10 -- # set +x 00:03:54.120 ************************************ 00:03:54.120 END TEST even_2G_alloc 00:03:54.120 ************************************ 00:03:54.120 23:02:59 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:54.120 23:02:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:54.120 23:02:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.120 23:02:59 -- common/autotest_common.sh@10 -- # set +x 00:03:54.120 ************************************ 00:03:54.120 START TEST odd_alloc 00:03:54.120 ************************************ 00:03:54.120 23:02:59 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:54.120 23:02:59 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:54.120 23:02:59 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:54.120 23:02:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.120 23:02:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.120 23:02:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:54.120 23:02:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.120 23:02:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.120 23:02:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.120 23:02:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:54.120 23:02:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.120 23:02:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.120 23:02:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.120 23:02:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.120 23:02:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:54.120 23:02:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.120 23:02:59 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:03:54.120 23:02:59 -- setup/hugepages.sh@83 -- # : 513 00:03:54.121 23:02:59 -- setup/hugepages.sh@84 -- # : 1 00:03:54.121 23:02:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.121 23:02:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:54.121 23:02:59 -- setup/hugepages.sh@83 -- # : 0 00:03:54.121 23:02:59 -- setup/hugepages.sh@84 -- # : 0 00:03:54.121 23:02:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.121 23:02:59 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:54.121 23:02:59 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:54.121 23:02:59 -- setup/hugepages.sh@160 -- # setup output 00:03:54.121 23:02:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.121 23:02:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:57.412 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.412 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.412 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.412 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.412 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.412 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.412 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.413 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.676 23:03:03 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:57.676 23:03:03 -- setup/hugepages.sh@89 -- # local node 00:03:57.676 23:03:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.676 23:03:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.676 23:03:03 -- setup/hugepages.sh@92 -- # local surp 00:03:57.676 23:03:03 -- setup/hugepages.sh@93 -- # local resv 00:03:57.676 23:03:03 -- setup/hugepages.sh@94 -- # local anon 00:03:57.676 23:03:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.676 23:03:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.676 23:03:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.676 23:03:03 -- setup/common.sh@18 -- # local node= 00:03:57.676 23:03:03 -- setup/common.sh@19 -- # local var val 00:03:57.676 23:03:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.676 23:03:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.676 23:03:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.676 23:03:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.676 23:03:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.676 23:03:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@16 -- 
# printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43272800 kB' 'MemAvailable: 47004584 kB' 'Buffers: 4100 kB' 'Cached: 10641988 kB' 'SwapCached: 0 kB' 'Active: 7443812 kB' 'Inactive: 3704024 kB' 'Active(anon): 7046640 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505124 kB' 'Mapped: 219680 kB' 'Shmem: 6544892 kB' 'KReclaimable: 254908 kB' 'Slab: 1182432 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 927524 kB' 'KernelStack: 22112 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480904 kB' 'Committed_AS: 8282652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218160 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ 
Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 
-- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.677 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.677 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.678 23:03:03 -- setup/common.sh@33 -- # echo 0 00:03:57.678 23:03:03 -- setup/common.sh@33 -- # return 0 00:03:57.678 23:03:03 -- setup/hugepages.sh@97 -- # anon=0 00:03:57.678 23:03:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.678 23:03:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.678 23:03:03 -- setup/common.sh@18 -- # local node= 00:03:57.678 23:03:03 -- setup/common.sh@19 -- # local var val 00:03:57.678 23:03:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.678 23:03:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.678 23:03:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.678 23:03:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.678 23:03:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.678 23:03:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43273316 kB' 'MemAvailable: 47005100 kB' 'Buffers: 4100 kB' 'Cached: 10641992 kB' 'SwapCached: 0 kB' 'Active: 7443544 kB' 'Inactive: 3704024 kB' 'Active(anon): 7046372 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504896 kB' 'Mapped: 219656 kB' 'Shmem: 6544896 kB' 'KReclaimable: 254908 kB' 'Slab: 1182424 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 927516 kB' 'KernelStack: 22112 kB' 
'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480904 kB' 'Committed_AS: 8282664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218144 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
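For the odd_alloc case the script requests 2098176 kB (HUGEMEM=2049) and arrives at nr_hugepages=1025, which get_test_nr_hugepages_per_node then spreads over the two NUMA nodes as 513 and 512, as the nodes_test assignments earlier in the trace show. A hypothetical helper that reproduces the same split; the real per-node loop lives in setup/hugepages.sh and is only approximated here:

#!/usr/bin/env bash
# Hypothetical sketch of the per-node split seen in the trace: the page count
# is divided evenly across NUMA nodes and any remainder lands on node 0,
# so 1025 pages over 2 nodes becomes node0=513, node1=512.
split_hugepages() {
    local total=$1 nodes=$2
    local -a per_node
    local node base=$(( total / nodes )) rem=$(( total % nodes ))

    for (( node = 0; node < nodes; node++ )); do
        per_node[node]=$base
    done
    per_node[0]=$(( base + rem ))   # remainder goes to the first node

    for (( node = 0; node < nodes; node++ )); do
        printf 'node%d=%d\n' "$node" "${per_node[node]}"
    done
}

split_hugepages 1025 2
# node0=513
# node1=512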
00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.678 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.678 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.679 23:03:03 -- setup/common.sh@33 -- # echo 0 00:03:57.679 23:03:03 -- setup/common.sh@33 -- # return 0 00:03:57.679 23:03:03 -- setup/hugepages.sh@99 -- # surp=0 00:03:57.679 23:03:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.679 23:03:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.679 23:03:03 -- setup/common.sh@18 -- # local node= 00:03:57.679 23:03:03 -- setup/common.sh@19 -- # local var val 00:03:57.679 23:03:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.679 23:03:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.679 23:03:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.679 23:03:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.679 23:03:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.679 23:03:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43273480 kB' 'MemAvailable: 47005264 kB' 'Buffers: 4100 kB' 'Cached: 10642000 kB' 'SwapCached: 0 kB' 'Active: 7443572 kB' 'Inactive: 3704024 kB' 'Active(anon): 7046400 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504476 kB' 'Mapped: 219580 kB' 'Shmem: 6544904 kB' 'KReclaimable: 254908 kB' 'Slab: 1182424 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 927516 kB' 'KernelStack: 22096 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480904 kB' 'Committed_AS: 8282680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218144 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.679 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.679 23:03:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': 
' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 
-- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.680 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.680 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.680 23:03:03 -- setup/common.sh@33 -- # echo 0 00:03:57.680 23:03:03 -- setup/common.sh@33 -- # return 0 00:03:57.680 23:03:03 -- setup/hugepages.sh@100 -- # resv=0 00:03:57.680 23:03:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:57.680 nr_hugepages=1025 00:03:57.681 23:03:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 
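(For reference: the trace ending above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until it reaches HugePages_Rsvd and echoing its value, 0. Below is a minimal stand-alone sketch of that lookup pattern; the helper name is hypothetical and this is not the SPDK script itself, just the same idea under the same assumptions about meminfo layout.)

#!/usr/bin/env bash
# Hypothetical re-sketch of the lookup seen in the trace above (not spdk's setup/common.sh).
# Print the value of one meminfo key, optionally from a per-NUMA-node meminfo file.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a f
    while IFS=': ' read -r -a f; do
        [[ ${f[0]:-} == Node ]] && f=("${f[@]:2}")   # per-node files prefix lines with "Node <N>"
        if [[ ${f[0]:-} == "$get" ]]; then
            echo "${f[1]}"                           # value only, without the trailing "kB"
            return 0
        fi
    done <"$mem_f"
    return 1
}
# Example: get_meminfo_sketch HugePages_Rsvd      -> 0 on the system traced above
#          get_meminfo_sketch HugePages_Surp 1    -> per-node value from node1/meminfo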
00:03:57.681 resv_hugepages=0 00:03:57.681 23:03:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.681 surplus_hugepages=0 00:03:57.681 23:03:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.681 anon_hugepages=0 00:03:57.681 23:03:03 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:57.681 23:03:03 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:57.681 23:03:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.681 23:03:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.681 23:03:03 -- setup/common.sh@18 -- # local node= 00:03:57.681 23:03:03 -- setup/common.sh@19 -- # local var val 00:03:57.681 23:03:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.681 23:03:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.681 23:03:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.681 23:03:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.681 23:03:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.681 23:03:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43273480 kB' 'MemAvailable: 47005264 kB' 'Buffers: 4100 kB' 'Cached: 10642016 kB' 'SwapCached: 0 kB' 'Active: 7443580 kB' 'Inactive: 3704024 kB' 'Active(anon): 7046408 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504888 kB' 'Mapped: 219580 kB' 'Shmem: 6544920 kB' 'KReclaimable: 254908 kB' 'Slab: 1182424 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 927516 kB' 'KernelStack: 22112 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480904 kB' 'Committed_AS: 8282692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218144 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 
23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.681 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.681 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 
23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.682 23:03:03 -- setup/common.sh@33 -- # echo 1025 00:03:57.682 23:03:03 -- setup/common.sh@33 -- # return 0 00:03:57.682 23:03:03 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:57.682 23:03:03 -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.682 23:03:03 -- setup/hugepages.sh@27 -- # local node 00:03:57.682 23:03:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.682 23:03:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.682 23:03:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.682 23:03:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:57.682 23:03:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.682 23:03:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.682 23:03:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.682 23:03:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.682 23:03:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.682 23:03:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.682 23:03:03 -- setup/common.sh@18 -- # local node=0 00:03:57.682 23:03:03 -- setup/common.sh@19 -- # local var val 00:03:57.682 23:03:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.682 23:03:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.682 23:03:03 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:03:57.682 23:03:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.682 23:03:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.682 23:03:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27513012 kB' 'MemUsed: 5072356 kB' 'SwapCached: 0 kB' 'Active: 2191288 kB' 'Inactive: 166356 kB' 'Active(anon): 2065404 kB' 'Inactive(anon): 0 kB' 'Active(file): 125884 kB' 'Inactive(file): 166356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110028 kB' 'Mapped: 111468 kB' 'AnonPages: 250840 kB' 'Shmem: 1817788 kB' 'KernelStack: 11384 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138324 kB' 'Slab: 574280 kB' 'SReclaimable: 138324 kB' 'SUnreclaim: 435956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 23:03:03 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@33 -- # echo 0 00:03:57.683 23:03:03 -- setup/common.sh@33 -- # return 0 00:03:57.683 23:03:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.683 23:03:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.683 23:03:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.683 23:03:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:57.683 23:03:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.683 23:03:03 -- setup/common.sh@18 -- # local node=1 00:03:57.683 23:03:03 -- setup/common.sh@19 -- # local var val 00:03:57.683 23:03:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.683 23:03:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.683 23:03:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.683 23:03:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.683 23:03:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.683 23:03:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698436 kB' 'MemFree: 15760396 kB' 'MemUsed: 11938040 kB' 'SwapCached: 0 kB' 'Active: 5252640 kB' 'Inactive: 3537668 kB' 'Active(anon): 4981352 kB' 'Inactive(anon): 0 kB' 'Active(file): 271288 kB' 'Inactive(file): 3537668 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8536116 kB' 'Mapped: 108112 kB' 'AnonPages: 254372 kB' 'Shmem: 4727160 kB' 'KernelStack: 10728 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116584 kB' 'Slab: 608144 kB' 'SReclaimable: 116584 kB' 'SUnreclaim: 491560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.683 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.683 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- 
setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # continue 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.684 23:03:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.684 23:03:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.684 23:03:03 -- setup/common.sh@33 -- # echo 0 00:03:57.684 23:03:03 -- setup/common.sh@33 -- # return 0 00:03:57.684 23:03:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.684 23:03:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.684 23:03:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.684 23:03:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.684 23:03:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:57.684 node0=512 expecting 513 00:03:57.684 23:03:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.684 23:03:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.684 23:03:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.684 23:03:03 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:57.684 node1=513 expecting 512 00:03:57.684 23:03:03 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 
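(The node0=512 / node1=513 lines above come from reading HugePages_Total out of each node's /sys/devices/system/node/nodeN/meminfo; the final [[ 512 513 == ... ]] test at hugepages.sh@130 effectively compares the sorted set of per-node counts, so the odd 1025-page allocation passes as long as one node holds 512 pages and the other 513, whichever way round. A compressed sketch of that check follows, assuming the two-node layout of this run; it is illustrative only, not the verify_nr_hugepages implementation.)

# Sketch only -- assumes two NUMA nodes and an expected 512/513 split, as in this run.
expected="512 513"
observed=$(awk '/HugePages_Total/ {print $4}' /sys/devices/system/node/node*/meminfo | sort -n | xargs)
if [[ $observed == "$expected" ]]; then
    echo "per-node hugepage split OK: $observed"
else
    echo "unexpected per-node split: got '$observed', expected '$expected'" >&2
    exit 1
fi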
00:03:57.684 00:03:57.684 real 0m3.695s 00:03:57.684 user 0m1.404s 00:03:57.684 sys 0m2.357s 00:03:57.684 23:03:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.684 23:03:03 -- common/autotest_common.sh@10 -- # set +x 00:03:57.684 ************************************ 00:03:57.684 END TEST odd_alloc 00:03:57.684 ************************************ 00:03:57.684 23:03:03 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:57.684 23:03:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:57.684 23:03:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.684 23:03:03 -- common/autotest_common.sh@10 -- # set +x 00:03:57.944 ************************************ 00:03:57.944 START TEST custom_alloc 00:03:57.944 ************************************ 00:03:57.944 23:03:03 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:57.944 23:03:03 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:57.944 23:03:03 -- setup/hugepages.sh@169 -- # local node 00:03:57.944 23:03:03 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:57.944 23:03:03 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:57.944 23:03:03 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:57.944 23:03:03 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:57.944 23:03:03 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:57.944 23:03:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:57.944 23:03:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:57.944 23:03:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:57.944 23:03:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.944 23:03:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:57.944 23:03:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.944 23:03:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.944 23:03:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.944 23:03:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:57.944 23:03:03 -- setup/hugepages.sh@83 -- # : 256 00:03:57.944 23:03:03 -- setup/hugepages.sh@84 -- # : 1 00:03:57.944 23:03:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:57.944 23:03:03 -- setup/hugepages.sh@83 -- # : 0 00:03:57.944 23:03:03 -- setup/hugepages.sh@84 -- # : 0 00:03:57.944 23:03:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:57.944 23:03:03 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:57.944 23:03:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.944 23:03:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.944 23:03:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.944 23:03:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:57.944 23:03:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:57.944 23:03:03 -- setup/hugepages.sh@62 -- # 
local user_nodes
00:03:57.944 23:03:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:57.944 23:03:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.944 23:03:03 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.944 23:03:03 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.944 23:03:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:57.944 23:03:03 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:57.944 23:03:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:57.944 23:03:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:57.944 23:03:03 -- setup/hugepages.sh@78 -- # return 0
00:03:57.944 23:03:03 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:57.944 23:03:03 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:57.944 23:03:03 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:57.944 23:03:03 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:57.944 23:03:03 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:57.944 23:03:03 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:57.944 23:03:03 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:57.944 23:03:03 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:57.944 23:03:03 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:57.944 23:03:03 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.944 23:03:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:57.944 23:03:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.944 23:03:03 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.944 23:03:03 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.944 23:03:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:57.944 23:03:03 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:57.944 23:03:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:57.944 23:03:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:57.944 23:03:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:57.944 23:03:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:57.944 23:03:03 -- setup/hugepages.sh@78 -- # return 0
00:03:57.944 23:03:03 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:57.944 23:03:03 -- setup/hugepages.sh@187 -- # setup output
00:03:57.944 23:03:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.944 23:03:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:01.241 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:01.241 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:01.241 23:03:06 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:01.241 23:03:06 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:01.241 23:03:06 -- setup/hugepages.sh@89 -- # local node
00:04:01.241 23:03:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:01.241 23:03:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:01.241 23:03:06 -- setup/hugepages.sh@92 -- # local surp
00:04:01.241 23:03:06 -- setup/hugepages.sh@93 -- # local resv
00:04:01.241 23:03:06 -- setup/hugepages.sh@94 -- # local anon
00:04:01.241 23:03:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:01.241 23:03:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:01.241 23:03:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:01.241 23:03:06 -- setup/common.sh@18 -- # local node=
00:04:01.241 23:03:06 -- setup/common.sh@19 -- # local var val
00:04:01.241 23:03:06 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.241 23:03:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.241 23:03:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.241 23:03:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.241 23:03:06 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.241 23:03:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.241 23:03:06 -- setup/common.sh@31 -- # IFS=': '
00:04:01.241 23:03:06 -- setup/common.sh@31 -- # read -r var val _
00:04:01.241 23:03:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 42231056 kB' 'MemAvailable: 45962840 kB' 'Buffers: 4100 kB' 'Cached: 10642124 kB' 'SwapCached: 0 kB' 'Active: 7437580 kB' 'Inactive: 3704024 kB' 'Active(anon): 7040408 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498256 kB' 'Mapped: 218792 kB' 'Shmem: 6545028 kB' 'KReclaimable: 254908 kB' 'Slab: 1183852 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928944 kB' 'KernelStack: 22064 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957640 kB' 'Committed_AS: 8274148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB'
[setup/common.sh@31-@32: the read loop walks each /proc/meminfo field in turn and skips every one that is not AnonHugePages]
00:04:01.242 23:03:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.242 23:03:06 -- setup/common.sh@33 -- # echo 0
00:04:01.242 23:03:06 -- setup/common.sh@33 -- # return 0
00:04:01.242 23:03:06 -- setup/hugepages.sh@97 -- # anon=0
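For readability, the following is a minimal bash stand-in for what the traced setup/common.sh get_meminfo helper appears to do, reconstructed only from the xtrace entries above; the real SPDK helper may differ in detail, and everything beyond the calls visible in the log is illustrative.

# Reconstructed sketch of the get_meminfo flow seen in the trace above; not the actual SPDK helper.
shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip seen at common.sh@29

get_meminfo() {
	local get=$1 node=${2:-}    # e.g. get_meminfo AnonHugePages, or get_meminfo HugePages_Surp 0
	local var val _ line
	local mem_f=/proc/meminfo mem
	# Per-node statistics come from sysfs when a node index is given (common.sh@23-@24).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")    # per-node lines begin with "Node N ", /proc/meminfo lines do not
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue    # this is the field-skipping loop condensed in the log
		echo "${val:-0}"
		return 0
	done
	return 1
}

With /proc/meminfo as in the snapshot above, such a helper returns 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd, and 1536 for HugePages_Total, matching the values the trace echoes below.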
00:04:01.242 23:03:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:01.243 23:03:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 42231792 kB' 'MemAvailable: 45963576 kB' 'Buffers: 4100 kB' 'Cached: 10642124 kB' 'SwapCached: 0 kB' 'Active: 7437288 kB' 'Inactive: 3704024 kB' 'Active(anon): 7040116 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498444 kB' 'Mapped: 218676 kB' 'Shmem: 6545028 kB' 'KReclaimable: 254908 kB' 'Slab: 1183836 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928928 kB' 'KernelStack: 22032 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957640 kB' 'Committed_AS: 8274160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB'
[setup/common.sh@17-@32: local get=HugePages_Surp with no node set, mapfile of /proc/meminfo, and the read loop that skips every field other than HugePages_Surp]
00:04:01.244 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.244 23:03:06 -- setup/common.sh@33 -- # echo 0
00:04:01.244 23:03:06 -- setup/common.sh@33 -- # return 0
00:04:01.244 23:03:06 -- setup/hugepages.sh@99 -- # surp=0
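The totals being verified here were fixed by the per-node assignment traced near the top of this excerpt (hugepages.sh@181-@187). A hedged sketch of that assembly, with only the 512/1024 split and the resulting HUGENODE string taken from the log and all other names illustrative:

# Illustrative only; mirrors the nodes_hp -> HUGENODE assembly seen at hugepages.sh@181-@187.
nodes_hp=([0]=512 [1]=1024)    # requested 2 MiB hugepages per NUMA node (Hugepagesize: 2048 kB above)

hugenode=()
nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
	hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
	(( nr_hugepages += nodes_hp[node] ))    # 512 + 1024 = 1536, the total checked by verify_nr_hugepages
done

HUGENODE=$(IFS=,; echo "${hugenode[*]}")
echo "HUGENODE=$HUGENODE nr_hugepages=$nr_hugepages"
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536, as in the trace,
#    handed to scripts/setup.sh before verify_nr_hugepages runs.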
00:04:01.244 23:03:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.244 23:03:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 42232364 kB' 'MemAvailable: 45964148 kB' 'Buffers: 4100 kB' 'Cached: 10642136 kB' 'SwapCached: 0 kB' 'Active: 7437360 kB' 'Inactive: 3704024 kB' 'Active(anon): 7040188 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498448 kB' 'Mapped: 218676 kB' 'Shmem: 6545040 kB' 'KReclaimable: 254908 kB' 'Slab: 1183836 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928928 kB' 'KernelStack: 22032 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957640 kB' 'Committed_AS: 8274176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB'
[setup/common.sh@17-@32: local get=HugePages_Rsvd with no node set, mapfile of /proc/meminfo, and the read loop that skips every field other than HugePages_Rsvd]
00:04:01.246 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.246 23:03:06 -- setup/common.sh@33 -- # echo 0
00:04:01.246 23:03:06 -- setup/common.sh@33 -- # return 0
00:04:01.246 23:03:06 -- setup/hugepages.sh@100 -- # resv=0
00:04:01.246 23:03:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:01.246 nr_hugepages=1536
00:04:01.246 23:03:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.246 resv_hugepages=0
00:04:01.246 23:03:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.246 surplus_hugepages=0
00:04:01.246 23:03:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.246 anon_hugepages=0
00:04:01.246 23:03:06 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:01.246 23:03:06 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
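The checks around hugepages.sh@107-@117 reduce to simple arithmetic on the values just echoed. A hedged sketch of that logic, reusing the get_meminfo stand-in sketched earlier; the structure and names are illustrative, only the constants come from the log:

# Sketch of the consistency checks; not the SPDK script itself.
verify_totals() {
	local nr_hugepages=1536 surp=0 resv=0      # nr_hugepages / surplus / reserved, as echoed above
	local -a nodes_test=([0]=512 [1]=1024)     # requested per-node split
	local node

	# hugepages.sh@107/@110: the kernel-wide total must equal requested + surplus + reserved.
	(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1    # 1536 == 1536 + 0 + 0

	# hugepages.sh@115-@117: fold resv into each node's expectation, then read that node's counters;
	# the node-0 HugePages_Surp lookup is where the trace picks up again below.
	for node in "${!nodes_test[@]}"; do
		(( nodes_test[node] += resv ))
		get_meminfo HugePages_Surp "$node"
	done
}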
00:04:01.246 23:03:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.246 23:03:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 42232364 kB' 'MemAvailable: 45964148 kB' 'Buffers: 4100 kB' 'Cached: 10642152 kB' 'SwapCached: 0 kB' 'Active: 7437644 kB' 'Inactive: 3704024 kB' 'Active(anon): 7040472 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498732 kB' 'Mapped: 218676 kB' 'Shmem: 6545056 kB' 'KReclaimable: 254908 kB' 'Slab: 1183836 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928928 kB' 'KernelStack: 22032 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957640 kB' 'Committed_AS: 8274188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB'
[setup/common.sh@17-@32: local get=HugePages_Total with no node set, mapfile of /proc/meminfo, and the read loop that skips every field other than HugePages_Total]
00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.248 23:03:06 -- setup/common.sh@33 -- # echo 1536
00:04:01.248 23:03:06 -- setup/common.sh@33 -- # return 0
00:04:01.248 23:03:06 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:01.248 23:03:06 -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.248 23:03:06 -- setup/hugepages.sh@27 -- # local node
00:04:01.248 23:03:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.248 23:03:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:01.248 23:03:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.248 23:03:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:01.248 23:03:06 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.248 23:03:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.248 23:03:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.248 23:03:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.248 23:03:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.248 23:03:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.248 23:03:06 -- setup/common.sh@18 -- # local node=0
00:04:01.248 23:03:06 -- setup/common.sh@19 -- # local var val
00:04:01.248 23:03:06 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.248 23:03:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.248 23:03:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.248 23:03:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.248 23:03:06 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.248 23:03:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': '
00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _
00:04:01.248 23:03:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27538600 kB' 'MemUsed: 5046768 kB' 'SwapCached: 0 kB' 'Active: 2186416 kB' 'Inactive: 166356 kB' 'Active(anon): 2060532 kB' 'Inactive(anon): 0 kB' 'Active(file): 125884 kB' 'Inactive(file): 166356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110088 kB' 'Mapped: 111472 kB' 'AnonPages: 246088 kB' 'Shmem: 1817848 kB' 'KernelStack: 11400 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138324 kB' 'Slab: 575384 kB' 'SReclaimable: 138324 kB' 'SUnreclaim: 437060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue
00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': '
00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _
00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.248 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.248 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.248 23:03:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # 
continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@33 -- # echo 0 00:04:01.249 23:03:06 -- setup/common.sh@33 -- # return 0 00:04:01.249 23:03:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.249 23:03:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.249 23:03:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.249 23:03:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.249 23:03:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.249 23:03:06 -- setup/common.sh@18 -- # local node=1 00:04:01.249 23:03:06 -- setup/common.sh@19 -- # local var val 00:04:01.249 23:03:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.249 23:03:06 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:01.249 23:03:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.249 23:03:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.249 23:03:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.249 23:03:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698436 kB' 'MemFree: 14693500 kB' 'MemUsed: 13004936 kB' 'SwapCached: 0 kB' 'Active: 5251576 kB' 'Inactive: 3537668 kB' 'Active(anon): 4980288 kB' 'Inactive(anon): 0 kB' 'Active(file): 271288 kB' 'Inactive(file): 3537668 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8536164 kB' 'Mapped: 107204 kB' 'AnonPages: 253112 kB' 'Shmem: 4727208 kB' 'KernelStack: 10664 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116584 kB' 'Slab: 608452 kB' 'SReclaimable: 116584 kB' 'SUnreclaim: 491868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.249 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.249 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # 
continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.250 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.250 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.510 23:03:06 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # continue 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.510 23:03:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.510 23:03:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.510 23:03:06 -- setup/common.sh@33 -- # echo 0 00:04:01.510 23:03:06 -- setup/common.sh@33 -- # return 0 00:04:01.510 23:03:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.510 23:03:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.510 23:03:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.510 23:03:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.510 23:03:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.510 node0=512 expecting 512 00:04:01.510 23:03:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.510 23:03:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.510 23:03:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.510 23:03:06 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:01.510 node1=1024 expecting 1024 00:04:01.510 23:03:06 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:01.510 00:04:01.510 real 0m3.570s 00:04:01.510 user 0m1.279s 00:04:01.510 sys 0m2.322s 00:04:01.510 23:03:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.510 23:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:01.511 ************************************ 00:04:01.511 END TEST custom_alloc 00:04:01.511 ************************************ 00:04:01.511 23:03:07 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:01.511 23:03:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.511 23:03:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.511 23:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:01.511 ************************************ 00:04:01.511 START TEST no_shrink_alloc 00:04:01.511 ************************************ 00:04:01.511 23:03:07 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:01.511 23:03:07 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:01.511 23:03:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.511 23:03:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.511 23:03:07 -- setup/hugepages.sh@51 -- # shift 00:04:01.511 23:03:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.511 23:03:07 -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.511 23:03:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.511 23:03:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.511 23:03:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.511 23:03:07 -- setup/hugepages.sh@62 -- 
# user_nodes=('0') 00:04:01.511 23:03:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.511 23:03:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.511 23:03:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.511 23:03:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.511 23:03:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.511 23:03:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.511 23:03:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.511 23:03:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.511 23:03:07 -- setup/hugepages.sh@73 -- # return 0 00:04:01.511 23:03:07 -- setup/hugepages.sh@198 -- # setup output 00:04:01.511 23:03:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.511 23:03:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:04.820 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:04.820 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:04.820 23:03:10 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:04.820 23:03:10 -- setup/hugepages.sh@89 -- # local node 00:04:04.820 23:03:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.820 23:03:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.820 23:03:10 -- setup/hugepages.sh@92 -- # local surp 00:04:04.820 23:03:10 -- setup/hugepages.sh@93 -- # local resv 00:04:04.820 23:03:10 -- setup/hugepages.sh@94 -- # local anon 00:04:04.820 23:03:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.820 23:03:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.820 23:03:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.820 23:03:10 -- setup/common.sh@18 -- # local node= 00:04:04.820 23:03:10 -- setup/common.sh@19 -- # local var val 00:04:04.820 23:03:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.820 23:03:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.820 23:03:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.820 23:03:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.820 23:03:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.820 23:03:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.820 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 
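The no_shrink_alloc trace above starts from get_test_nr_hugepages 2097152 0: with the 2048 kB default hugepage size reported in the meminfo dumps that follow (Hugepagesize: 2048 kB), a 2097152 kB request works out to nr_hugepages=1024, which is exactly the value assigned to nodes_test[0] while node 1 is left alone. A minimal stand-alone sketch of that size-to-page-count arithmetic, assuming (as the numbers here suggest) that the size argument is in kB; this is an illustration, not the real setup/hugepages.sh helper:

    #!/usr/bin/env bash
    # Sketch only: mirrors the arithmetic visible in the trace above.
    size_kb=${1:-2097152}                              # requested hugepage pool, in kB (assumed unit)
    shift || true
    nodes=("$@")                                       # e.g. "0" -> pin the whole pool to node0
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(( size_kb / hugepagesize_kb ))      # 2097152 / 2048 = 1024
    echo "nr_hugepages=$nr_hugepages on nodes: ${nodes[*]:-all}"

Run with the same arguments as the trace (2097152 0) it prints nr_hugepages=1024 on nodes: 0, matching the per-node assignment above.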
00:04:04.820 23:03:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43288092 kB' 'MemAvailable: 47019876 kB' 'Buffers: 4100 kB' 'Cached: 10642252 kB' 'SwapCached: 0 kB' 'Active: 7440260 kB' 'Inactive: 3704024 kB' 'Active(anon): 7043088 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501224 kB' 'Mapped: 218632 kB' 'Shmem: 6545156 kB' 'KReclaimable: 254908 kB' 'Slab: 1183888 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928980 kB' 'KernelStack: 22128 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8274788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:04:04.820 23:03:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 23:03:10 -- setup/common.sh@32 -- # continue 00:04:04.820 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 23:03:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 23:03:10 -- setup/common.sh@32 -- # continue 00:04:04.820 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 23:03:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 23:03:10 -- setup/common.sh@32 -- # continue 00:04:04.821 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 
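The long printf above is setup/common.sh's get_meminfo dumping the /proc/meminfo snapshot it captured with mapfile before walking it: each "Key: value kB" line is split with IFS=': ' into var and val, non-matching keys fall through to continue, and the first key equal to the requested one (AnonHugePages at this point in the trace) is echoed back as the function's result. A minimal sketch of that lookup pattern under an invented name (meminfo_get), since the real get_meminfo takes extra options and also handles the per-node files seen earlier:

    #!/usr/bin/env bash
    # meminfo_get is a made-up name for illustration. The real helper can also
    # read /sys/devices/system/node/node<N>/meminfo (stripping the "Node N "
    # prefix), which is how the node0/node1 HugePages_Surp values above were read.
    meminfo_get() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    meminfo_get AnonHugePages     # value in kB
    meminfo_get HugePages_Total   # page count, 1024 in the dump above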
00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.083 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.083 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # 
continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.084 23:03:10 -- setup/common.sh@33 -- # echo 0 00:04:05.084 23:03:10 -- setup/common.sh@33 -- # return 0 00:04:05.084 23:03:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.084 23:03:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.084 23:03:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.084 23:03:10 -- setup/common.sh@18 -- # local node= 00:04:05.084 23:03:10 -- setup/common.sh@19 -- # local var val 00:04:05.084 23:03:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.084 23:03:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.084 23:03:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.084 23:03:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.084 23:03:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.084 23:03:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43290604 kB' 'MemAvailable: 47022388 kB' 'Buffers: 4100 kB' 'Cached: 10642256 kB' 'SwapCached: 0 kB' 'Active: 7440004 kB' 'Inactive: 3704024 kB' 'Active(anon): 7042832 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501064 kB' 'Mapped: 218632 kB' 'Shmem: 6545160 kB' 'KReclaimable: 254908 kB' 'Slab: 1183888 kB' 'SReclaimable: 254908 
kB' 'SUnreclaim: 928980 kB' 'KernelStack: 22096 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8274432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.084 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # 
[[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.084 23:03:10 -- setup/common.sh@32 -- # continue [... identical continue/IFS/read steps repeat for each remaining /proc/meminfo key, Inactive(file) through Unaccepted, none matching HugePages_Surp ...] 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # read -r var val _
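The wall of [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue steps above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the requested key. A minimal standalone sketch of that lookup (illustrative name get_meminfo_sketch, not the repo's verbatim function):

shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # a node argument switches to the per-node meminfo file, as the trace does later on
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix each key with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every non-match is one "continue" step in the trace
        echo "$val"
        return 0
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp   -> 0 on this runner
#      get_meminfo_sketch HugePages_Total  -> 1024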
00:04:05.085 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.085 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.085 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.085 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.085 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.085 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.085 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.085 23:03:10 -- setup/common.sh@33 -- # echo 0 00:04:05.085 23:03:10 -- setup/common.sh@33 -- # return 0 00:04:05.085 23:03:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.085 23:03:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.085 23:03:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.085 23:03:10 -- setup/common.sh@18 -- # local node= 00:04:05.085 23:03:10 -- setup/common.sh@19 -- # local var val 00:04:05.085 23:03:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.085 23:03:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.085 23:03:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.085 23:03:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.085 23:03:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.085 23:03:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.085 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.086 23:03:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43290916 kB' 'MemAvailable: 47022700 kB' 'Buffers: 4100 kB' 'Cached: 10642260 kB' 'SwapCached: 0 kB' 'Active: 7438764 kB' 'Inactive: 3704024 kB' 'Active(anon): 7041592 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499748 kB' 'Mapped: 218632 kB' 'Shmem: 6545164 kB' 'KReclaimable: 254908 kB' 'Slab: 1183888 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928980 kB' 'KernelStack: 22016 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8274452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:04:05.086 23:03:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.086 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.086 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.086 23:03:10 -- 
setup/common.sh@31 -- # read -r var val _ [... identical continue/IFS/read steps repeat for each /proc/meminfo key from MemFree through Percpu, none matching HugePages_Rsvd ...] 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 --
setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.087 23:03:10 -- setup/common.sh@33 -- # echo 0 00:04:05.087 23:03:10 -- setup/common.sh@33 -- # return 0 00:04:05.087 23:03:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.087 23:03:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.087 nr_hugepages=1024 
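At this point the trace has surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd) and nr_hugepages=1024, and hugepages.sh is about to confirm that the pool the kernel reports matches what the test configured. The same consistency check, pulled out of the trace as standalone arithmetic (variable names illustrative, values taken from the run above):

nr_hugepages=1024   # 2048 kB pages configured for the test
surp=0              # HugePages_Surp from /proc/meminfo
resv=0              # HugePages_Rsvd from /proc/meminfo
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: ${total} x 2048 kB = $(( total * 2048 / 1024 )) MiB"
fi

With the values above this reports a 2048 MiB pool, which is what the 1024 x 2 MiB pages on this runner amount to (Hugetlb: 2097152 kB in the dump).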
00:04:05.087 23:03:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.087 resv_hugepages=0 00:04:05.087 23:03:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.087 surplus_hugepages=0 00:04:05.087 23:03:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.087 anon_hugepages=0 00:04:05.087 23:03:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.087 23:03:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.087 23:03:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.087 23:03:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.087 23:03:10 -- setup/common.sh@18 -- # local node= 00:04:05.087 23:03:10 -- setup/common.sh@19 -- # local var val 00:04:05.087 23:03:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.087 23:03:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.087 23:03:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.087 23:03:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.087 23:03:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.087 23:03:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43292092 kB' 'MemAvailable: 47023876 kB' 'Buffers: 4100 kB' 'Cached: 10642284 kB' 'SwapCached: 0 kB' 'Active: 7438560 kB' 'Inactive: 3704024 kB' 'Active(anon): 7041388 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499544 kB' 'Mapped: 218632 kB' 'Shmem: 6545188 kB' 'KReclaimable: 254908 kB' 'Slab: 1184024 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 929116 kB' 'KernelStack: 22016 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8274596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.087 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.087 23:03:10 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.087 23:03:10 -- setup/common.sh@32 -- # continue [... identical continue/IFS/read steps repeat for each /proc/meminfo key from Cached through AnonHugePages, none matching HugePages_Total ...] 00:04:05.088 23:03:10 -- setup/common.sh@31 -- #
IFS=': ' 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.088 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.088 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.088 23:03:10 -- setup/common.sh@33 -- # echo 1024 00:04:05.088 23:03:10 -- setup/common.sh@33 -- # return 0 00:04:05.088 23:03:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.088 23:03:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.088 23:03:10 -- setup/hugepages.sh@27 -- # local node 00:04:05.088 23:03:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.088 23:03:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.088 23:03:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.088 23:03:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.088 23:03:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.088 23:03:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.088 23:03:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.088 23:03:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.088 23:03:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.088 23:03:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.088 23:03:10 -- setup/common.sh@18 -- # local node=0 00:04:05.088 23:03:10 -- setup/common.sh@19 -- # local var val 00:04:05.088 23:03:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.088 23:03:10 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.088 23:03:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.088 23:03:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.089 23:03:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.089 23:03:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26489484 kB' 'MemUsed: 6095884 kB' 'SwapCached: 0 kB' 'Active: 2185740 kB' 'Inactive: 166356 kB' 'Active(anon): 2059856 kB' 'Inactive(anon): 0 kB' 'Active(file): 125884 kB' 'Inactive(file): 166356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110176 kB' 'Mapped: 111480 kB' 'AnonPages: 245080 kB' 'Shmem: 1817936 kB' 'KernelStack: 11288 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138324 kB' 'Slab: 575568 kB' 'SReclaimable: 138324 kB' 'SUnreclaim: 437244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 
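The lookup just above differs from the earlier ones in one detail: because get_meminfo was called with node=0, mem_f is repointed from /proc/meminfo to /sys/devices/system/node/node0/meminfo, so the HugePages counters now describe NUMA node 0 only. As a quick cross-check outside the test scripts, the same per-node numbers are also exposed directly in sysfs (the paths below are the standard kernel layout for 2048 kB pages, not anything SPDK-specific):

node=0
hp=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB
echo "node${node}: $(cat $hp/nr_hugepages) allocated, $(cat $hp/free_hugepages) free, $(cat $hp/surplus_hugepages) surplus"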
00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ [... identical continue/IFS/read steps repeat for each node0 meminfo key from Active(file) through FilePmdMapped, none matching HugePages_Surp ...] 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _
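What follows is the per-node bookkeeping: hugepages.sh folds the node-0 surplus it just read into its per-node counters and then prints the node0=1024 expecting 1024 comparison. Condensed into a standalone sketch (the expected split is hard-coded here to the values visible in the trace, 1024 pages on node0 and 0 on node1; the real script carries these in its nodes_test/nodes_sys arrays):

declare -A expected=( [0]=1024 [1]=0 )
for node in "${!expected[@]}"; do
    actual=$(awk '/HugePages_Total/ {print $NF}' "/sys/devices/system/node/node${node}/meminfo")
    echo "node${node}=${actual} expecting ${expected[$node]}"
done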
00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # continue 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.089 23:03:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.089 23:03:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.089 23:03:10 -- setup/common.sh@33 -- # echo 0 00:04:05.089 23:03:10 -- setup/common.sh@33 -- # return 0 00:04:05.089 23:03:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.089 23:03:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.090 23:03:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.090 23:03:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.090 23:03:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.090 node0=1024 expecting 1024 00:04:05.090 23:03:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.090 23:03:10 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:05.090 23:03:10 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:05.090 23:03:10 -- setup/hugepages.sh@202 -- # setup output 00:04:05.090 23:03:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.090 23:03:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:08.454 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:08.454 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:08.454 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:08.717 23:03:14 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:08.717 23:03:14 -- setup/hugepages.sh@89 -- # local node 00:04:08.717 23:03:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.717 23:03:14 -- setup/hugepages.sh@91 -- # 
local sorted_s 00:04:08.717 23:03:14 -- setup/hugepages.sh@92 -- # local surp 00:04:08.717 23:03:14 -- setup/hugepages.sh@93 -- # local resv 00:04:08.717 23:03:14 -- setup/hugepages.sh@94 -- # local anon 00:04:08.717 23:03:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.717 23:03:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.717 23:03:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.717 23:03:14 -- setup/common.sh@18 -- # local node= 00:04:08.717 23:03:14 -- setup/common.sh@19 -- # local var val 00:04:08.717 23:03:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.717 23:03:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.717 23:03:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.717 23:03:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.717 23:03:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.717 23:03:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.717 23:03:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43318836 kB' 'MemAvailable: 47050620 kB' 'Buffers: 4100 kB' 'Cached: 10642376 kB' 'SwapCached: 0 kB' 'Active: 7440100 kB' 'Inactive: 3704024 kB' 'Active(anon): 7042928 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500444 kB' 'Mapped: 219356 kB' 'Shmem: 6545280 kB' 'KReclaimable: 254908 kB' 'Slab: 1183792 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928884 kB' 'KernelStack: 21936 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8277220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:04:08.717 23:03:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.717 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.717 23:03:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.717 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.717 23:03:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.717 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.717 23:03:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.717 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.717 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.717 23:03:14 -- 
setup/common.sh@31 -- # read -r var val _ [... identical continue/IFS/read steps repeat for each /proc/meminfo key from Cached through PageTables, none matching AnonHugePages ...] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.718 23:03:14 -- setup/common.sh@33 -- # echo 0 00:04:08.718 23:03:14 -- setup/common.sh@33 -- # return 0 00:04:08.718 23:03:14 -- setup/hugepages.sh@97 -- # anon=0 00:04:08.718 23:03:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.718 23:03:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.718 23:03:14 -- setup/common.sh@18 -- # local node= 00:04:08.718 23:03:14 -- setup/common.sh@19 -- # local var val 00:04:08.718 23:03:14 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:08.718 23:03:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.718 23:03:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.718 23:03:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.718 23:03:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.718 23:03:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43315580 kB' 'MemAvailable: 47047364 kB' 'Buffers: 4100 kB' 'Cached: 10642384 kB' 'SwapCached: 0 kB' 'Active: 7443916 kB' 'Inactive: 3704024 kB' 'Active(anon): 7046744 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504204 kB' 'Mapped: 219716 kB' 'Shmem: 6545288 kB' 'KReclaimable: 254908 kB' 'Slab: 1183784 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928876 kB' 'KernelStack: 21984 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8297720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217936 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 
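The scan above (and the ones that follow) is setup/common.sh's get_meminfo helper at work: it picks /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node is given, loads it with mapfile, strips any "Node N " prefix, then splits each line on ': ' and echoes the value once the requested key is reached. A minimal stand-alone sketch of that lookup pattern, with a hypothetical name (meminfo_value) and an assumed fall-back to 0 for keys the file does not list:

  #!/usr/bin/env bash
  # Sketch of the lookup the trace is exercising (not the SPDK helper itself).
  meminfo_value() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node "$node" }             # per-node lines carry a "Node N " prefix
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"                   # e.g. 0 for AnonHugePages above
              return 0
          fi
      done < "$mem_f"
      echo 0                                     # assumed fall-back when the key is absent
  }

  meminfo_value HugePages_Total                  # 1024 in this run
  meminfo_value HugePages_Surp 0                 # surplus pages on NUMA node 0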
00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.718 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.718 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.719 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.719 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.719 23:03:14 -- setup/common.sh@33 -- # echo 0 00:04:08.719 23:03:14 -- setup/common.sh@33 -- # return 0 00:04:08.719 23:03:14 -- setup/hugepages.sh@99 -- # surp=0 00:04:08.719 23:03:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.719 23:03:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.719 23:03:14 -- setup/common.sh@18 -- # local node= 00:04:08.719 23:03:14 -- setup/common.sh@19 -- # local var val 00:04:08.719 23:03:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.719 23:03:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.720 23:03:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.720 23:03:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.720 23:03:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.720 23:03:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43316724 kB' 'MemAvailable: 47048508 kB' 'Buffers: 4100 kB' 'Cached: 10642396 kB' 'SwapCached: 0 kB' 'Active: 7442220 kB' 'Inactive: 3704024 kB' 'Active(anon): 7045048 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502992 kB' 'Mapped: 219220 kB' 'Shmem: 6545300 kB' 'KReclaimable: 254908 kB' 'Slab: 1183744 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928836 kB' 'KernelStack: 22016 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8280260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.720 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.720 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 
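A note on reading these checks: the backslash-heavy words such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are just how bash xtrace renders the unquoted right-hand side of == inside [[ ]]; the expansion of $get is escaped character by character so the traced word reads as a literal rather than a glob. A tiny, illustrative reproduction (not part of the test scripts):

  #!/usr/bin/env bash
  set -x                                         # xtrace, as the autotest scripts use
  get=HugePages_Rsvd
  [[ MemTotal == $get ]] || echo "no match"      # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]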
00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.721 23:03:14 -- setup/common.sh@33 -- # echo 0 00:04:08.721 23:03:14 -- setup/common.sh@33 -- # return 0 00:04:08.721 23:03:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:08.721 23:03:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.721 nr_hugepages=1024 00:04:08.721 23:03:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.721 resv_hugepages=0 00:04:08.721 23:03:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.721 surplus_hugepages=0 00:04:08.721 23:03:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.721 anon_hugepages=0 00:04:08.721 23:03:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.721 23:03:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.721 23:03:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.721 23:03:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.721 23:03:14 -- setup/common.sh@18 -- # local node= 00:04:08.721 23:03:14 -- setup/common.sh@19 -- # local var val 00:04:08.721 23:03:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.721 23:03:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.721 23:03:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.721 23:03:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.721 23:03:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.721 23:03:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283804 kB' 'MemFree: 43311936 kB' 'MemAvailable: 47043720 kB' 'Buffers: 4100 kB' 'Cached: 10642408 kB' 'SwapCached: 0 kB' 'Active: 7438640 kB' 'Inactive: 3704024 kB' 'Active(anon): 7041468 kB' 'Inactive(anon): 0 kB' 'Active(file): 397172 kB' 'Inactive(file): 3704024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499468 kB' 'Mapped: 219108 kB' 'Shmem: 6545312 kB' 'KReclaimable: 254908 kB' 'Slab: 1183744 kB' 'SReclaimable: 254908 kB' 'SUnreclaim: 928836 kB' 'KernelStack: 22032 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481928 kB' 'Committed_AS: 8275748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 81536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2573684 kB' 'DirectMap2M: 28569600 kB' 'DirectMap1G: 38797312 kB' 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.721 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.721 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 
23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 
23:03:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 
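This full pass over the snapshot is the HugePages_Total lookup on hugepages.sh line 110; once it echoes 1024, the consistency check (( 1024 == nr_hugepages + surp + resv )) passes, since the earlier lookups all came back 0. The same accounting can be verified stand-alone, sketched here with awk instead of the helper (variable names are mine):

  #!/usr/bin/env bash
  # Stand-alone version of the accounting check traced here (illustrative).
  nr_hugepages=1024                                              # what the test requested
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent: $total == $nr_hugepages + $surp + $resv"
  else
      echo "hugepage pool mismatch: total=$total surp=$surp resv=$resv" >&2
      exit 1
  fi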
00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.722 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.722 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.722 23:03:14 -- setup/common.sh@33 -- # echo 1024 00:04:08.722 23:03:14 -- setup/common.sh@33 -- # return 0 00:04:08.722 23:03:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:04:08.722 23:03:14 -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.723 23:03:14 -- setup/hugepages.sh@27 -- # local node 00:04:08.723 23:03:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.723 23:03:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.723 23:03:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.723 23:03:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:08.723 23:03:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.723 23:03:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.723 23:03:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.723 23:03:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.723 23:03:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.723 23:03:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.723 23:03:14 -- setup/common.sh@18 -- # local node=0 00:04:08.723 23:03:14 -- setup/common.sh@19 -- # local var val 00:04:08.723 23:03:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.723 23:03:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.723 23:03:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.723 23:03:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.723 23:03:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.723 23:03:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.723 23:03:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26495108 kB' 'MemUsed: 6090260 kB' 'SwapCached: 0 kB' 'Active: 2185236 kB' 'Inactive: 166356 kB' 'Active(anon): 2059352 kB' 'Inactive(anon): 0 kB' 'Active(file): 125884 kB' 'Inactive(file): 166356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110216 kB' 'Mapped: 111492 kB' 'AnonPages: 244616 kB' 'Shmem: 1817976 kB' 'KernelStack: 11304 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138324 kB' 'Slab: 575272 kB' 'SReclaimable: 138324 kB' 'SUnreclaim: 436948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 
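From here the trace moves to per-NUMA-node accounting: get_nodes globs /sys/devices/system/node/node+([0-9]), records 1024 pages for node 0 and 0 for node 1 (no_nodes=2), and then re-runs get_meminfo against node 0's own meminfo file, whose dump indeed shows HugePages_Total: 1024. A short sketch of the same per-node walk (helper-free; names are mine, and the raw per-node meminfo lines are assumed to carry the usual "Node N " prefix):

  #!/usr/bin/env bash
  # Per-node hugepage census in the spirit of get_nodes above (illustrative).
  declare -A node_pages
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      # A per-node line looks like: "Node 0 HugePages_Total:  1024"
      pages=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' \
              "$node_dir/meminfo")
      node_pages[$node]=${pages:-0}
  done
  for node in "${!node_pages[@]}"; do
      echo "node$node: ${node_pages[$node]} hugepages"           # this run: node0=1024, node1=0
  done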
00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.723 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.723 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.724 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.724 23:03:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.724 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.724 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.724 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.724 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.724 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.724 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.724 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.724 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.724 23:03:14 -- setup/common.sh@32 -- # continue 00:04:08.724 23:03:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.724 23:03:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.724 23:03:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.724 23:03:14 -- setup/common.sh@33 -- # echo 0 00:04:08.724 23:03:14 -- setup/common.sh@33 -- # return 0 00:04:08.724 23:03:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.724 23:03:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.724 23:03:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.724 23:03:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.724 23:03:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:08.724 node0=1024 expecting 1024 00:04:08.724 23:03:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.724 00:04:08.724 real 0m7.293s 00:04:08.724 user 0m2.676s 00:04:08.724 sys 0m4.730s 00:04:08.724 23:03:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.724 23:03:14 -- common/autotest_common.sh@10 -- # set +x 00:04:08.724 ************************************ 00:04:08.724 END TEST no_shrink_alloc 00:04:08.724 ************************************ 00:04:08.724 23:03:14 -- setup/hugepages.sh@217 -- # clear_hp 00:04:08.724 23:03:14 -- setup/hugepages.sh@37 -- # local node hp 00:04:08.724 23:03:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.724 23:03:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.724 23:03:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:08.724 
23:03:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.724 23:03:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:08.724 23:03:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.724 23:03:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.724 23:03:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:08.724 23:03:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.724 23:03:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:08.724 23:03:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:08.724 23:03:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.724 00:04:08.724 real 0m28.099s 00:04:08.724 user 0m9.690s 00:04:08.724 sys 0m16.954s 00:04:08.724 23:03:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.724 23:03:14 -- common/autotest_common.sh@10 -- # set +x 00:04:08.724 ************************************ 00:04:08.724 END TEST hugepages 00:04:08.724 ************************************ 00:04:08.724 23:03:14 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:08.724 23:03:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.724 23:03:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.724 23:03:14 -- common/autotest_common.sh@10 -- # set +x 00:04:08.724 ************************************ 00:04:08.724 START TEST driver 00:04:08.724 ************************************ 00:04:08.724 23:03:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:08.983 * Looking for test storage... 00:04:08.983 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:08.983 23:03:14 -- setup/driver.sh@68 -- # setup reset 00:04:08.983 23:03:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.983 23:03:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.261 23:03:19 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.261 23:03:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.261 23:03:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.261 23:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:14.261 ************************************ 00:04:14.261 START TEST guess_driver 00:04:14.261 ************************************ 00:04:14.261 23:03:19 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:14.261 23:03:19 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.261 23:03:19 -- setup/driver.sh@47 -- # local fail=0 00:04:14.261 23:03:19 -- setup/driver.sh@49 -- # pick_driver 00:04:14.261 23:03:19 -- setup/driver.sh@36 -- # vfio 00:04:14.261 23:03:19 -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.261 23:03:19 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.261 23:03:19 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.261 23:03:19 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:14.261 23:03:19 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.261 23:03:19 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:14.261 23:03:19 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:14.261 23:03:19 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:14.261 23:03:19 -- setup/driver.sh@12 -- # dep vfio_pci 
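pick_driver, traced here, prefers vfio-pci whenever the host exposes IOMMU groups (or unsafe no-IOMMU mode is enabled) and modprobe can actually resolve vfio_pci to kernel modules, as the --show-depends output just below confirms. A minimal sketch of that decision under the same sysfs paths; check_vfio is an illustrative name, not the setup/driver.sh function:

    check_vfio() {
        local unsafe=N n_groups
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
            && unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
        if (( n_groups > 0 )) || [[ $unsafe == [Yy]* ]]; then
            # --show-depends lists the .ko files modprobe would insert; a match
            # means vfio_pci and its dependencies are really available here.
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }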
00:04:14.261 23:03:19 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:14.261 23:03:19 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:14.261 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:14.261 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:14.261 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:14.261 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:14.261 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:14.261 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:14.261 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:14.261 23:03:19 -- setup/driver.sh@30 -- # return 0 00:04:14.261 23:03:19 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:14.261 23:03:19 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:14.261 23:03:19 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.261 23:03:19 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:14.261 Looking for driver=vfio-pci 00:04:14.261 23:03:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.261 23:03:19 -- setup/driver.sh@45 -- # setup output config 00:04:14.261 23:03:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.261 23:03:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 23:03:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 23:03:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.554 23:03:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.461 23:03:24 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.461 23:03:24 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.461 23:03:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.461 23:03:24 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:19.461 23:03:24 -- setup/driver.sh@65 -- # setup reset 00:04:19.461 23:03:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.461 23:03:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.739 00:04:24.740 real 0m10.140s 00:04:24.740 user 0m2.526s 00:04:24.740 sys 0m5.021s 00:04:24.740 23:03:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.740 23:03:29 -- common/autotest_common.sh@10 -- # set +x 00:04:24.740 ************************************ 00:04:24.740 END TEST guess_driver 00:04:24.740 ************************************ 00:04:24.740 00:04:24.740 real 0m15.174s 00:04:24.740 user 0m3.975s 00:04:24.740 sys 0m7.839s 00:04:24.740 23:03:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.740 23:03:29 -- common/autotest_common.sh@10 -- # set +x 00:04:24.740 ************************************ 00:04:24.740 END TEST driver 00:04:24.740 ************************************ 00:04:24.740 23:03:29 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:24.740 23:03:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.740 23:03:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.740 23:03:29 -- common/autotest_common.sh@10 -- # set +x 00:04:24.740 ************************************ 00:04:24.740 START TEST devices 00:04:24.740 ************************************ 00:04:24.740 23:03:29 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:24.740 * Looking for test storage... 00:04:24.740 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:24.740 23:03:29 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:24.740 23:03:29 -- setup/devices.sh@192 -- # setup reset 00:04:24.740 23:03:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.740 23:03:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.033 23:03:33 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:28.033 23:03:33 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:28.033 23:03:33 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:28.033 23:03:33 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:28.033 23:03:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:28.033 23:03:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:28.033 23:03:33 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:28.033 23:03:33 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.033 23:03:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:28.033 23:03:33 -- setup/devices.sh@196 -- # blocks=() 00:04:28.033 23:03:33 -- setup/devices.sh@196 -- # declare -a blocks 00:04:28.033 23:03:33 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:28.033 23:03:33 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:28.033 23:03:33 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:28.033 23:03:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:28.033 23:03:33 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:28.033 23:03:33 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:28.033 23:03:33 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:28.033 23:03:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:28.033 23:03:33 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:28.033 23:03:33 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:28.033 23:03:33 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:28.033 No valid GPT data, bailing 00:04:28.033 23:03:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:28.033 23:03:33 -- scripts/common.sh@393 -- # pt= 00:04:28.033 23:03:33 -- scripts/common.sh@394 -- # return 1 00:04:28.033 23:03:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:28.033 23:03:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:28.033 23:03:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:28.033 23:03:33 -- setup/common.sh@80 -- # echo 2000398934016 00:04:28.033 23:03:33 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:28.033 23:03:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:28.033 23:03:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:28.033 23:03:33 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:28.033 23:03:33 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:28.033 23:03:33 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:28.033 23:03:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.033 23:03:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.033 23:03:33 -- common/autotest_common.sh@10 -- # set +x 00:04:28.033 ************************************ 
00:04:28.033 START TEST nvme_mount 00:04:28.033 ************************************ 00:04:28.033 23:03:33 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:28.033 23:03:33 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:28.033 23:03:33 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:28.033 23:03:33 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.033 23:03:33 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.033 23:03:33 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:28.033 23:03:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:28.033 23:03:33 -- setup/common.sh@40 -- # local part_no=1 00:04:28.033 23:03:33 -- setup/common.sh@41 -- # local size=1073741824 00:04:28.033 23:03:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:28.033 23:03:33 -- setup/common.sh@44 -- # parts=() 00:04:28.033 23:03:33 -- setup/common.sh@44 -- # local parts 00:04:28.033 23:03:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:28.033 23:03:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.033 23:03:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.033 23:03:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:28.033 23:03:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.033 23:03:33 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:28.033 23:03:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:28.033 23:03:33 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:29.071 Creating new GPT entries in memory. 00:04:29.071 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:29.071 other utilities. 00:04:29.071 23:03:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:29.071 23:03:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.071 23:03:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.071 23:03:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.071 23:03:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:30.007 Creating new GPT entries in memory. 00:04:30.007 The operation has completed successfully. 
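The partition_drive step just traced wipes the GPT and creates a single 1 GiB partition (sectors 2048 through 2099199, i.e. 2097152 sectors of 512 bytes), holding the disk under flock while the uevent helper watches for the new node. A condensed sketch of that sequence; the udevadm wait stands in for scripts/sync_dev_uevents.sh and is only illustrative:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                       # destroy any old GPT/MBR metadata
    # 2099199 - 2048 + 1 = 2097152 sectors * 512 B = 1 GiB
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199
    udevadm settle                                 # wait for /dev/nvme0n1p1 to appear
    [[ -b ${disk}p1 ]] && echo 'partition ready'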
00:04:30.007 23:03:35 -- setup/common.sh@57 -- # (( part++ )) 00:04:30.007 23:03:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.007 23:03:35 -- setup/common.sh@62 -- # wait 424255 00:04:30.007 23:03:35 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.007 23:03:35 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:30.007 23:03:35 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.007 23:03:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:30.007 23:03:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:30.007 23:03:35 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.007 23:03:35 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.007 23:03:35 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:30.007 23:03:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:30.007 23:03:35 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.007 23:03:35 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.007 23:03:35 -- setup/devices.sh@53 -- # local found=0 00:04:30.007 23:03:35 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.007 23:03:35 -- setup/devices.sh@56 -- # : 00:04:30.007 23:03:35 -- setup/devices.sh@59 -- # local pci status 00:04:30.007 23:03:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 23:03:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:30.007 23:03:35 -- setup/devices.sh@47 -- # setup output config 00:04:30.007 23:03:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.007 23:03:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:33.301 23:03:38 -- setup/devices.sh@63 -- # found=1 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.301 23:03:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.301 23:03:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.301 23:03:39 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:33.301 23:03:39 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.301 23:03:39 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.301 23:03:39 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.301 23:03:39 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:33.301 23:03:39 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.301 23:03:39 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.301 23:03:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.301 23:03:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.561 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.561 23:03:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.561 23:03:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.821 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:33.821 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:33.821 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.821 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
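cleanup_nvme, traced above, tears the first nvme_mount pass down before the whole-disk variant that follows: unmount the test mount point if it is still mounted, then let wipefs clear the ext4 magic on the partition and the GPT/PMBR signatures on the disk. A condensed restatement of the mountpoint/umount/wipefs calls shown in the trace:

    mnt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # erases ext4 magic at 0x438
    [[ -b /dev/nvme0n1 ]]   && wipefs --all /dev/nvme0n1     # erases GPT headers and PMBR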
00:04:33.821 23:03:39 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:33.821 23:03:39 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:33.821 23:03:39 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.821 23:03:39 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:33.821 23:03:39 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:33.821 23:03:39 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.821 23:03:39 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.821 23:03:39 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:33.821 23:03:39 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:33.821 23:03:39 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.821 23:03:39 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.821 23:03:39 -- setup/devices.sh@53 -- # local found=0 00:04:33.821 23:03:39 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.821 23:03:39 -- setup/devices.sh@56 -- # : 00:04:33.821 23:03:39 -- setup/devices.sh@59 -- # local pci status 00:04:33.821 23:03:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.821 23:03:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:33.821 23:03:39 -- setup/devices.sh@47 -- # setup output config 00:04:33.821 23:03:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.821 23:03:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:37.114 23:03:42 -- setup/devices.sh@63 -- # found=1 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.114 23:03:42 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:37.114 23:03:42 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.114 23:03:42 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.114 23:03:42 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.114 23:03:42 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.114 23:03:42 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:37.114 23:03:42 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:37.114 23:03:42 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:37.114 23:03:42 -- setup/devices.sh@50 -- # local mount_point= 00:04:37.114 23:03:42 -- setup/devices.sh@51 -- # local test_file= 00:04:37.114 23:03:42 -- setup/devices.sh@53 -- # local found=0 00:04:37.114 23:03:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:37.114 23:03:42 -- setup/devices.sh@59 -- # local pci status 00:04:37.114 23:03:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.114 23:03:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:37.114 23:03:42 -- setup/devices.sh@47 -- # setup output config 00:04:37.114 23:03:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.114 23:03:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:40.408 23:03:45 -- setup/devices.sh@63 -- # found=1 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.408 23:03:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.408 23:03:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.668 23:03:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.668 23:03:46 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.668 23:03:46 -- setup/devices.sh@68 -- # return 0 00:04:40.668 23:03:46 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:40.668 23:03:46 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.668 23:03:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.668 23:03:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.668 23:03:46 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:04:40.668 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.668 00:04:40.668 real 0m12.787s 00:04:40.668 user 0m3.698s 00:04:40.668 sys 0m7.027s 00:04:40.668 23:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.668 23:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.668 ************************************ 00:04:40.668 END TEST nvme_mount 00:04:40.668 ************************************ 00:04:40.668 23:03:46 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:40.668 23:03:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.668 23:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.668 23:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.668 ************************************ 00:04:40.668 START TEST dm_mount 00:04:40.668 ************************************ 00:04:40.668 23:03:46 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:40.668 23:03:46 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:40.668 23:03:46 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:40.668 23:03:46 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:40.668 23:03:46 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:40.668 23:03:46 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.668 23:03:46 -- setup/common.sh@40 -- # local part_no=2 00:04:40.668 23:03:46 -- setup/common.sh@41 -- # local size=1073741824 00:04:40.668 23:03:46 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.668 23:03:46 -- setup/common.sh@44 -- # parts=() 00:04:40.668 23:03:46 -- setup/common.sh@44 -- # local parts 00:04:40.668 23:03:46 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.668 23:03:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.668 23:03:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.668 23:03:46 -- setup/common.sh@46 -- # (( part++ )) 00:04:40.668 23:03:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.668 23:03:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.668 23:03:46 -- setup/common.sh@46 -- # (( part++ )) 00:04:40.668 23:03:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.668 23:03:46 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.668 23:03:46 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:40.668 23:03:46 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:41.607 Creating new GPT entries in memory. 00:04:41.607 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.607 other utilities. 00:04:41.607 23:03:47 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.607 23:03:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.607 23:03:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.607 23:03:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.607 23:03:47 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.546 Creating new GPT entries in memory. 00:04:42.546 The operation has completed successfully. 00:04:42.546 23:03:48 -- setup/common.sh@57 -- # (( part++ )) 00:04:42.546 23:03:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.546 23:03:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:42.546 23:03:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.546 23:03:48 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:43.926 The operation has completed successfully. 00:04:43.926 23:03:49 -- setup/common.sh@57 -- # (( part++ )) 00:04:43.926 23:03:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.926 23:03:49 -- setup/common.sh@62 -- # wait 428772 00:04:43.926 23:03:49 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:43.926 23:03:49 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:43.926 23:03:49 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.926 23:03:49 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:43.926 23:03:49 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:43.926 23:03:49 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.926 23:03:49 -- setup/devices.sh@161 -- # break 00:04:43.926 23:03:49 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.926 23:03:49 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:43.926 23:03:49 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:43.926 23:03:49 -- setup/devices.sh@166 -- # dm=dm-2 00:04:43.926 23:03:49 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:43.926 23:03:49 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:43.926 23:03:49 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:43.926 23:03:49 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:43.927 23:03:49 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:43.927 23:03:49 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.927 23:03:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:43.927 23:03:49 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:43.927 23:03:49 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.927 23:03:49 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:43.927 23:03:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:43.927 23:03:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:43.927 23:03:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.927 23:03:49 -- setup/devices.sh@53 -- # local found=0 00:04:43.927 23:03:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.927 23:03:49 -- setup/devices.sh@56 -- # : 00:04:43.927 23:03:49 -- setup/devices.sh@59 -- # local pci status 00:04:43.927 23:03:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.927 23:03:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:43.927 23:03:49 -- setup/devices.sh@47 -- # setup output config 
00:04:43.927 23:03:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.927 23:03:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:47.221 23:03:52 -- setup/devices.sh@63 -- # found=1 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.221 23:03:52 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:47.221 23:03:52 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:47.221 23:03:52 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:47.221 23:03:52 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:47.221 23:03:52 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:47.221 23:03:52 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:47.221 23:03:52 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:47.221 23:03:52 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:47.221 23:03:52 -- setup/devices.sh@50 -- # local mount_point= 00:04:47.221 23:03:52 -- setup/devices.sh@51 -- # local test_file= 00:04:47.221 23:03:52 -- setup/devices.sh@53 -- # local found=0 00:04:47.221 23:03:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.221 23:03:52 -- setup/devices.sh@59 -- # local pci status 00:04:47.221 23:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.221 23:03:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:47.221 23:03:52 -- setup/devices.sh@47 -- # setup output config 00:04:47.221 23:03:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.221 23:03:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:50.603 23:03:55 -- setup/devices.sh@63 -- # found=1 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.603 23:03:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.603 23:03:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.603 23:03:56 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.603 23:03:56 -- setup/devices.sh@68 -- # return 0 00:04:50.603 23:03:56 -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.603 23:03:56 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:50.603 23:03:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.603 23:03:56 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:50.603 23:03:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.603 23:03:56 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:50.603 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.603 23:03:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.603 23:03:56 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:50.603 00:04:50.603 real 0m9.878s 00:04:50.603 user 0m2.432s 00:04:50.603 sys 0m4.539s 00:04:50.603 23:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.603 23:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.603 ************************************ 00:04:50.603 END TEST dm_mount 00:04:50.603 ************************************ 00:04:50.603 23:03:56 -- setup/devices.sh@1 -- # cleanup 00:04:50.603 23:03:56 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:50.603 23:03:56 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.604 23:03:56 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.604 23:03:56 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.604 23:03:56 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.604 23:03:56 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.863 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:50.863 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.863 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.863 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.863 23:03:56 -- setup/devices.sh@12 -- 
# cleanup_dm 00:04:50.863 23:03:56 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:50.863 23:03:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.863 23:03:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.863 23:03:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.863 23:03:56 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.863 23:03:56 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:50.863 00:04:50.863 real 0m26.782s 00:04:50.863 user 0m7.579s 00:04:50.863 sys 0m14.155s 00:04:50.863 23:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.863 23:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.863 ************************************ 00:04:50.863 END TEST devices 00:04:50.863 ************************************ 00:04:50.863 00:04:50.863 real 1m34.644s 00:04:50.863 user 0m28.634s 00:04:50.863 sys 0m53.773s 00:04:50.863 23:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.863 23:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.863 ************************************ 00:04:50.863 END TEST setup.sh 00:04:50.863 ************************************ 00:04:50.863 23:03:56 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:54.155 Hugepages 00:04:54.155 node hugesize free / total 00:04:54.155 node0 1048576kB 0 / 0 00:04:54.155 node0 2048kB 2048 / 2048 00:04:54.155 node1 1048576kB 0 / 0 00:04:54.155 node1 2048kB 0 / 0 00:04:54.155 00:04:54.155 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.155 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:54.155 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:54.155 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:54.155 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:54.155 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:54.155 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:54.155 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:54.155 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:54.155 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:54.155 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:54.155 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:54.155 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:54.155 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:54.155 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:54.155 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:54.155 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:54.155 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:54.155 23:03:59 -- spdk/autotest.sh@141 -- # uname -s 00:04:54.155 23:03:59 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:54.155 23:03:59 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:54.155 23:03:59 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:57.445 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.445 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.445 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 
0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.704 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.610 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.869 23:04:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:00.808 23:04:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:00.808 23:04:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:00.808 23:04:06 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.808 23:04:06 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:00.808 23:04:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:00.808 23:04:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:00.808 23:04:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.808 23:04:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.808 23:04:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:00.808 23:04:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:00.808 23:04:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:01.067 23:04:06 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.359 Waiting for block devices as requested 00:05:04.359 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:04.359 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:04.359 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:04.359 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:04.359 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:04.618 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:04.618 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:04.618 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:04.618 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:04.877 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:04.877 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:04.877 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:05.137 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:05.137 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:05.137 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:05.396 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:05.396 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:05.655 23:04:11 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:05.655 23:04:11 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:05.655 23:04:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:05.655 23:04:11 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:05.655 23:04:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:05.655 23:04:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:05.655 23:04:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:05.655 23:04:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:05.655 23:04:11 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:05.655 
23:04:11 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:05.655 23:04:11 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:05.655 23:04:11 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:05.655 23:04:11 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:05.655 23:04:11 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:05:05.655 23:04:11 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:05.655 23:04:11 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:05.655 23:04:11 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:05.655 23:04:11 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:05.655 23:04:11 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:05.655 23:04:11 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:05.655 23:04:11 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:05.655 23:04:11 -- common/autotest_common.sh@1542 -- # continue 00:05:05.655 23:04:11 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:05.655 23:04:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:05.655 23:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 23:04:11 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:05.655 23:04:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:05.655 23:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:05.655 23:04:11 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:08.946 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:08.946 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.853 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:10.853 23:04:16 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:10.853 23:04:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:10.853 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:05:10.853 23:04:16 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:10.853 23:04:16 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:10.853 23:04:16 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.853 23:04:16 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:10.853 23:04:16 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:10.853 23:04:16 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:10.853 23:04:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:10.853 23:04:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:10.853 23:04:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.853 23:04:16 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:10.853 23:04:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.112 23:04:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:11.112 23:04:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:11.112 23:04:16 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:11.112 23:04:16 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:11.112 23:04:16 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:11.112 23:04:16 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:11.112 23:04:16 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:11.112 23:04:16 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:05:11.112 23:04:16 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:05:11.112 23:04:16 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=438469 00:05:11.112 23:04:16 -- common/autotest_common.sh@1583 -- # waitforlisten 438469 00:05:11.112 23:04:16 -- common/autotest_common.sh@819 -- # '[' -z 438469 ']' 00:05:11.112 23:04:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.112 23:04:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:11.112 23:04:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.112 23:04:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.112 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.112 23:04:16 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.112 [2024-11-02 23:04:16.679506] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
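The opal_revert_cleanup step above narrows the bdf list to controllers whose PCI device ID reads 0x0a54 by cat-ing /sys/bus/pci/devices/<bdf>/device. A minimal, self-contained sketch of that filter follows; it scans sysfs by NVMe class code instead of going through gen_nvme.sh, and the variable names are illustrative rather than the actual autotest_common.sh helpers:

  #!/usr/bin/env bash
  # Illustrative sketch: pick NVMe controllers whose PCI device ID is 0x0a54,
  # the same check traced above via cat /sys/bus/pci/devices/.../device.
  target_id=0x0a54
  for dev in /sys/bus/pci/devices/*; do
      # 0x0108xx is the PCI class code for NVMe controllers
      [[ $(cat "$dev/class") == 0x0108* ]] || continue
      if [[ $(cat "$dev/device") == "$target_id" ]]; then
          echo "matching NVMe controller: $(basename "$dev")"
      fi
  done

On the machine above this would report only 0000:d8:00.0, which matches the single-entry bdfs list seen in the trace.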
00:05:11.112 [2024-11-02 23:04:16.679551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438469 ] 00:05:11.112 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.112 [2024-11-02 23:04:16.749065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.112 [2024-11-02 23:04:16.822087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.112 [2024-11-02 23:04:16.822202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.050 23:04:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.050 23:04:17 -- common/autotest_common.sh@852 -- # return 0 00:05:12.050 23:04:17 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:12.050 23:04:17 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:12.050 23:04:17 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:15.339 nvme0n1 00:05:15.339 23:04:20 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:15.339 [2024-11-02 23:04:20.625743] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:15.339 request: 00:05:15.339 { 00:05:15.339 "nvme_ctrlr_name": "nvme0", 00:05:15.339 "password": "test", 00:05:15.339 "method": "bdev_nvme_opal_revert", 00:05:15.339 "req_id": 1 00:05:15.339 } 00:05:15.339 Got JSON-RPC error response 00:05:15.339 response: 00:05:15.339 { 00:05:15.339 "code": -32602, 00:05:15.339 "message": "Invalid parameters" 00:05:15.339 } 00:05:15.339 23:04:20 -- common/autotest_common.sh@1589 -- # true 00:05:15.339 23:04:20 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:15.339 23:04:20 -- common/autotest_common.sh@1593 -- # killprocess 438469 00:05:15.339 23:04:20 -- common/autotest_common.sh@926 -- # '[' -z 438469 ']' 00:05:15.339 23:04:20 -- common/autotest_common.sh@930 -- # kill -0 438469 00:05:15.339 23:04:20 -- common/autotest_common.sh@931 -- # uname 00:05:15.339 23:04:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:15.339 23:04:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 438469 00:05:15.339 23:04:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:15.339 23:04:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:15.339 23:04:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 438469' 00:05:15.339 killing process with pid 438469 00:05:15.339 23:04:20 -- common/autotest_common.sh@945 -- # kill 438469 00:05:15.339 23:04:20 -- common/autotest_common.sh@950 -- # wait 438469 00:05:17.874 23:04:23 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:17.874 23:04:23 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:17.874 23:04:23 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:17.874 23:04:23 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:17.874 23:04:23 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:17.874 23:04:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:17.874 23:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 23:04:23 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:17.874 23:04:23 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.874 23:04:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.874 23:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 ************************************ 00:05:17.874 START TEST env 00:05:17.874 ************************************ 00:05:17.874 23:04:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:17.874 * Looking for test storage... 00:05:17.874 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:17.874 23:04:23 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:17.874 23:04:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.874 23:04:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.874 23:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 ************************************ 00:05:17.874 START TEST env_memory 00:05:17.874 ************************************ 00:05:17.874 23:04:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:17.874 00:05:17.874 00:05:17.874 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.874 http://cunit.sourceforge.net/ 00:05:17.874 00:05:17.874 00:05:17.874 Suite: memory 00:05:17.874 Test: alloc and free memory map ...[2024-11-02 23:04:23.423283] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:17.874 passed 00:05:17.874 Test: mem map translation ...[2024-11-02 23:04:23.441326] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:17.874 [2024-11-02 23:04:23.441344] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:17.874 [2024-11-02 23:04:23.441380] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:17.874 [2024-11-02 23:04:23.441389] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:17.874 passed 00:05:17.874 Test: mem map registration ...[2024-11-02 23:04:23.476298] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:17.874 [2024-11-02 23:04:23.476315] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:17.874 passed 00:05:17.874 Test: mem map adjacent registrations ...passed 00:05:17.874 00:05:17.874 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.874 suites 1 1 n/a 0 0 00:05:17.874 tests 4 4 4 0 0 00:05:17.874 asserts 152 152 152 0 n/a 00:05:17.874 00:05:17.874 Elapsed time = 0.131 seconds 00:05:17.874 00:05:17.874 real 0m0.145s 00:05:17.874 user 0m0.131s 00:05:17.874 sys 0m0.013s 00:05:17.874 23:04:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.874 23:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 ************************************ 
00:05:17.874 END TEST env_memory 00:05:17.874 ************************************ 00:05:17.874 23:04:23 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:17.874 23:04:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.874 23:04:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.874 23:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 ************************************ 00:05:17.874 START TEST env_vtophys 00:05:17.874 ************************************ 00:05:17.874 23:04:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:17.874 EAL: lib.eal log level changed from notice to debug 00:05:17.874 EAL: Detected lcore 0 as core 0 on socket 0 00:05:17.874 EAL: Detected lcore 1 as core 1 on socket 0 00:05:17.874 EAL: Detected lcore 2 as core 2 on socket 0 00:05:17.874 EAL: Detected lcore 3 as core 3 on socket 0 00:05:17.874 EAL: Detected lcore 4 as core 4 on socket 0 00:05:17.874 EAL: Detected lcore 5 as core 5 on socket 0 00:05:17.874 EAL: Detected lcore 6 as core 6 on socket 0 00:05:17.874 EAL: Detected lcore 7 as core 8 on socket 0 00:05:17.874 EAL: Detected lcore 8 as core 9 on socket 0 00:05:17.875 EAL: Detected lcore 9 as core 10 on socket 0 00:05:17.875 EAL: Detected lcore 10 as core 11 on socket 0 00:05:17.875 EAL: Detected lcore 11 as core 12 on socket 0 00:05:17.875 EAL: Detected lcore 12 as core 13 on socket 0 00:05:17.875 EAL: Detected lcore 13 as core 14 on socket 0 00:05:17.875 EAL: Detected lcore 14 as core 16 on socket 0 00:05:17.875 EAL: Detected lcore 15 as core 17 on socket 0 00:05:17.875 EAL: Detected lcore 16 as core 18 on socket 0 00:05:17.875 EAL: Detected lcore 17 as core 19 on socket 0 00:05:17.875 EAL: Detected lcore 18 as core 20 on socket 0 00:05:17.875 EAL: Detected lcore 19 as core 21 on socket 0 00:05:17.875 EAL: Detected lcore 20 as core 22 on socket 0 00:05:17.875 EAL: Detected lcore 21 as core 24 on socket 0 00:05:17.875 EAL: Detected lcore 22 as core 25 on socket 0 00:05:17.875 EAL: Detected lcore 23 as core 26 on socket 0 00:05:17.875 EAL: Detected lcore 24 as core 27 on socket 0 00:05:17.875 EAL: Detected lcore 25 as core 28 on socket 0 00:05:17.875 EAL: Detected lcore 26 as core 29 on socket 0 00:05:17.875 EAL: Detected lcore 27 as core 30 on socket 0 00:05:17.875 EAL: Detected lcore 28 as core 0 on socket 1 00:05:17.875 EAL: Detected lcore 29 as core 1 on socket 1 00:05:17.875 EAL: Detected lcore 30 as core 2 on socket 1 00:05:17.875 EAL: Detected lcore 31 as core 3 on socket 1 00:05:17.875 EAL: Detected lcore 32 as core 4 on socket 1 00:05:17.875 EAL: Detected lcore 33 as core 5 on socket 1 00:05:17.875 EAL: Detected lcore 34 as core 6 on socket 1 00:05:17.875 EAL: Detected lcore 35 as core 8 on socket 1 00:05:17.875 EAL: Detected lcore 36 as core 9 on socket 1 00:05:17.875 EAL: Detected lcore 37 as core 10 on socket 1 00:05:17.875 EAL: Detected lcore 38 as core 11 on socket 1 00:05:17.875 EAL: Detected lcore 39 as core 12 on socket 1 00:05:17.875 EAL: Detected lcore 40 as core 13 on socket 1 00:05:17.875 EAL: Detected lcore 41 as core 14 on socket 1 00:05:17.875 EAL: Detected lcore 42 as core 16 on socket 1 00:05:17.875 EAL: Detected lcore 43 as core 17 on socket 1 00:05:17.875 EAL: Detected lcore 44 as core 18 on socket 1 00:05:17.875 EAL: Detected lcore 45 as core 19 on socket 1 00:05:17.875 EAL: Detected lcore 46 as core 20 on socket 1 00:05:17.875 EAL: Detected lcore 47 as 
core 21 on socket 1 00:05:17.875 EAL: Detected lcore 48 as core 22 on socket 1 00:05:17.875 EAL: Detected lcore 49 as core 24 on socket 1 00:05:17.875 EAL: Detected lcore 50 as core 25 on socket 1 00:05:17.875 EAL: Detected lcore 51 as core 26 on socket 1 00:05:17.875 EAL: Detected lcore 52 as core 27 on socket 1 00:05:17.875 EAL: Detected lcore 53 as core 28 on socket 1 00:05:17.875 EAL: Detected lcore 54 as core 29 on socket 1 00:05:17.875 EAL: Detected lcore 55 as core 30 on socket 1 00:05:17.875 EAL: Detected lcore 56 as core 0 on socket 0 00:05:17.875 EAL: Detected lcore 57 as core 1 on socket 0 00:05:17.875 EAL: Detected lcore 58 as core 2 on socket 0 00:05:17.875 EAL: Detected lcore 59 as core 3 on socket 0 00:05:17.875 EAL: Detected lcore 60 as core 4 on socket 0 00:05:17.875 EAL: Detected lcore 61 as core 5 on socket 0 00:05:17.875 EAL: Detected lcore 62 as core 6 on socket 0 00:05:17.875 EAL: Detected lcore 63 as core 8 on socket 0 00:05:17.875 EAL: Detected lcore 64 as core 9 on socket 0 00:05:17.875 EAL: Detected lcore 65 as core 10 on socket 0 00:05:17.875 EAL: Detected lcore 66 as core 11 on socket 0 00:05:17.875 EAL: Detected lcore 67 as core 12 on socket 0 00:05:17.875 EAL: Detected lcore 68 as core 13 on socket 0 00:05:17.875 EAL: Detected lcore 69 as core 14 on socket 0 00:05:17.875 EAL: Detected lcore 70 as core 16 on socket 0 00:05:17.875 EAL: Detected lcore 71 as core 17 on socket 0 00:05:17.875 EAL: Detected lcore 72 as core 18 on socket 0 00:05:17.875 EAL: Detected lcore 73 as core 19 on socket 0 00:05:17.875 EAL: Detected lcore 74 as core 20 on socket 0 00:05:17.875 EAL: Detected lcore 75 as core 21 on socket 0 00:05:17.875 EAL: Detected lcore 76 as core 22 on socket 0 00:05:17.875 EAL: Detected lcore 77 as core 24 on socket 0 00:05:17.875 EAL: Detected lcore 78 as core 25 on socket 0 00:05:17.875 EAL: Detected lcore 79 as core 26 on socket 0 00:05:17.875 EAL: Detected lcore 80 as core 27 on socket 0 00:05:17.875 EAL: Detected lcore 81 as core 28 on socket 0 00:05:17.875 EAL: Detected lcore 82 as core 29 on socket 0 00:05:17.875 EAL: Detected lcore 83 as core 30 on socket 0 00:05:17.875 EAL: Detected lcore 84 as core 0 on socket 1 00:05:17.875 EAL: Detected lcore 85 as core 1 on socket 1 00:05:17.875 EAL: Detected lcore 86 as core 2 on socket 1 00:05:17.875 EAL: Detected lcore 87 as core 3 on socket 1 00:05:17.875 EAL: Detected lcore 88 as core 4 on socket 1 00:05:17.875 EAL: Detected lcore 89 as core 5 on socket 1 00:05:17.875 EAL: Detected lcore 90 as core 6 on socket 1 00:05:17.875 EAL: Detected lcore 91 as core 8 on socket 1 00:05:17.875 EAL: Detected lcore 92 as core 9 on socket 1 00:05:17.875 EAL: Detected lcore 93 as core 10 on socket 1 00:05:17.875 EAL: Detected lcore 94 as core 11 on socket 1 00:05:17.875 EAL: Detected lcore 95 as core 12 on socket 1 00:05:17.875 EAL: Detected lcore 96 as core 13 on socket 1 00:05:17.875 EAL: Detected lcore 97 as core 14 on socket 1 00:05:17.875 EAL: Detected lcore 98 as core 16 on socket 1 00:05:17.875 EAL: Detected lcore 99 as core 17 on socket 1 00:05:17.875 EAL: Detected lcore 100 as core 18 on socket 1 00:05:17.875 EAL: Detected lcore 101 as core 19 on socket 1 00:05:17.875 EAL: Detected lcore 102 as core 20 on socket 1 00:05:17.875 EAL: Detected lcore 103 as core 21 on socket 1 00:05:17.875 EAL: Detected lcore 104 as core 22 on socket 1 00:05:17.875 EAL: Detected lcore 105 as core 24 on socket 1 00:05:17.875 EAL: Detected lcore 106 as core 25 on socket 1 00:05:17.875 EAL: Detected lcore 107 as core 26 on socket 1 
00:05:17.875 EAL: Detected lcore 108 as core 27 on socket 1 00:05:17.875 EAL: Detected lcore 109 as core 28 on socket 1 00:05:17.875 EAL: Detected lcore 110 as core 29 on socket 1 00:05:17.875 EAL: Detected lcore 111 as core 30 on socket 1 00:05:17.875 EAL: Maximum logical cores by configuration: 128 00:05:17.875 EAL: Detected CPU lcores: 112 00:05:17.875 EAL: Detected NUMA nodes: 2 00:05:17.875 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:17.875 EAL: Detected shared linkage of DPDK 00:05:17.875 EAL: No shared files mode enabled, IPC will be disabled 00:05:17.875 EAL: Bus pci wants IOVA as 'DC' 00:05:17.875 EAL: Buses did not request a specific IOVA mode. 00:05:17.875 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:17.875 EAL: Selected IOVA mode 'VA' 00:05:18.135 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.135 EAL: Probing VFIO support... 00:05:18.135 EAL: IOMMU type 1 (Type 1) is supported 00:05:18.135 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:18.135 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:18.135 EAL: VFIO support initialized 00:05:18.135 EAL: Ask a virtual area of 0x2e000 bytes 00:05:18.135 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:18.135 EAL: Setting up physically contiguous memory... 00:05:18.135 EAL: Setting maximum number of open files to 524288 00:05:18.135 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:18.135 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:18.135 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:18.135 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.135 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:18.135 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.135 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.135 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:18.135 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:18.135 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.135 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:18.135 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.135 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.135 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:18.135 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:18.135 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.135 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:18.135 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.135 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.135 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:18.135 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:18.135 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.135 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:18.135 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.135 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.135 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:18.135 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:18.135 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:18.135 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.135 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:18.135 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.135 EAL: Ask a virtual area of 
0x400000000 bytes 00:05:18.135 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:18.135 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:18.135 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.135 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:18.135 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.135 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.135 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:18.135 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:18.135 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.135 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:18.135 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.135 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.135 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:18.135 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:18.135 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.135 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:18.135 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.135 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.135 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:18.135 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:18.135 EAL: Hugepages will be freed exactly as allocated. 00:05:18.135 EAL: No shared files mode enabled, IPC is disabled 00:05:18.135 EAL: No shared files mode enabled, IPC is disabled 00:05:18.135 EAL: TSC frequency is ~2500000 KHz 00:05:18.135 EAL: Main lcore 0 is ready (tid=7f98b7f05a00;cpuset=[0]) 00:05:18.135 EAL: Trying to obtain current memory policy. 00:05:18.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.135 EAL: Restoring previous memory policy: 0 00:05:18.135 EAL: request: mp_malloc_sync 00:05:18.135 EAL: No shared files mode enabled, IPC is disabled 00:05:18.135 EAL: Heap on socket 0 was expanded by 2MB 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:18.136 EAL: Mem event callback 'spdk:(nil)' registered 00:05:18.136 00:05:18.136 00:05:18.136 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.136 http://cunit.sourceforge.net/ 00:05:18.136 00:05:18.136 00:05:18.136 Suite: components_suite 00:05:18.136 Test: vtophys_malloc_test ...passed 00:05:18.136 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:18.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.136 EAL: Restoring previous memory policy: 4 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was expanded by 4MB 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was shrunk by 4MB 00:05:18.136 EAL: Trying to obtain current memory policy. 
00:05:18.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.136 EAL: Restoring previous memory policy: 4 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was expanded by 6MB 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was shrunk by 6MB 00:05:18.136 EAL: Trying to obtain current memory policy. 00:05:18.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.136 EAL: Restoring previous memory policy: 4 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was expanded by 10MB 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was shrunk by 10MB 00:05:18.136 EAL: Trying to obtain current memory policy. 00:05:18.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.136 EAL: Restoring previous memory policy: 4 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was expanded by 18MB 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was shrunk by 18MB 00:05:18.136 EAL: Trying to obtain current memory policy. 00:05:18.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.136 EAL: Restoring previous memory policy: 4 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was expanded by 34MB 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was shrunk by 34MB 00:05:18.136 EAL: Trying to obtain current memory policy. 00:05:18.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.136 EAL: Restoring previous memory policy: 4 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was expanded by 66MB 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was shrunk by 66MB 00:05:18.136 EAL: Trying to obtain current memory policy. 
00:05:18.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.136 EAL: Restoring previous memory policy: 4 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was expanded by 130MB 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was shrunk by 130MB 00:05:18.136 EAL: Trying to obtain current memory policy. 00:05:18.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.136 EAL: Restoring previous memory policy: 4 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.136 EAL: request: mp_malloc_sync 00:05:18.136 EAL: No shared files mode enabled, IPC is disabled 00:05:18.136 EAL: Heap on socket 0 was expanded by 258MB 00:05:18.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.395 EAL: request: mp_malloc_sync 00:05:18.395 EAL: No shared files mode enabled, IPC is disabled 00:05:18.395 EAL: Heap on socket 0 was shrunk by 258MB 00:05:18.395 EAL: Trying to obtain current memory policy. 00:05:18.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.395 EAL: Restoring previous memory policy: 4 00:05:18.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.395 EAL: request: mp_malloc_sync 00:05:18.395 EAL: No shared files mode enabled, IPC is disabled 00:05:18.395 EAL: Heap on socket 0 was expanded by 514MB 00:05:18.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.395 EAL: request: mp_malloc_sync 00:05:18.395 EAL: No shared files mode enabled, IPC is disabled 00:05:18.395 EAL: Heap on socket 0 was shrunk by 514MB 00:05:18.395 EAL: Trying to obtain current memory policy. 
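The repeated expand/shrink pairs above are vtophys_spdk_malloc_test allocating progressively larger buffers; with the dynamic memory mode in use here ("Hugepages will be freed exactly as allocated"), each expansion corresponds to 2 MB hugepages being taken from the kernel and each shrink to them being returned. To watch that from outside the test, the standard Linux sysfs counters are enough; the polling loop below is illustrative only:

  # Illustrative: watch per-NUMA-node 2 MB hugepage usage while a test runs.
  while sleep 1; do
      for n in /sys/devices/system/node/node*; do
          free=$(cat "$n/hugepages/hugepages-2048kB/free_hugepages")
          total=$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")
          echo "$(basename "$n"): $((total - free)) of $total hugepages in use"
      done
  done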
00:05:18.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.654 EAL: Restoring previous memory policy: 4 00:05:18.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.654 EAL: request: mp_malloc_sync 00:05:18.654 EAL: No shared files mode enabled, IPC is disabled 00:05:18.654 EAL: Heap on socket 0 was expanded by 1026MB 00:05:18.913 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.913 EAL: request: mp_malloc_sync 00:05:18.913 EAL: No shared files mode enabled, IPC is disabled 00:05:18.913 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:18.913 passed 00:05:18.913 00:05:18.913 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.913 suites 1 1 n/a 0 0 00:05:18.913 tests 2 2 2 0 0 00:05:18.913 asserts 497 497 497 0 n/a 00:05:18.913 00:05:18.913 Elapsed time = 0.965 seconds 00:05:18.913 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.913 EAL: request: mp_malloc_sync 00:05:18.913 EAL: No shared files mode enabled, IPC is disabled 00:05:18.913 EAL: Heap on socket 0 was shrunk by 2MB 00:05:18.913 EAL: No shared files mode enabled, IPC is disabled 00:05:18.913 EAL: No shared files mode enabled, IPC is disabled 00:05:18.913 EAL: No shared files mode enabled, IPC is disabled 00:05:18.913 00:05:18.913 real 0m1.089s 00:05:18.913 user 0m0.639s 00:05:18.913 sys 0m0.427s 00:05:18.913 23:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.913 23:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:18.913 ************************************ 00:05:18.913 END TEST env_vtophys 00:05:18.913 ************************************ 00:05:19.172 23:04:24 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:19.172 23:04:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.172 23:04:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.172 23:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:19.172 ************************************ 00:05:19.172 START TEST env_pci 00:05:19.172 ************************************ 00:05:19.172 23:04:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:19.172 00:05:19.172 00:05:19.172 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.172 http://cunit.sourceforge.net/ 00:05:19.172 00:05:19.172 00:05:19.172 Suite: pci 00:05:19.172 Test: pci_hook ...[2024-11-02 23:04:24.728692] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 440032 has claimed it 00:05:19.172 EAL: Cannot find device (10000:00:01.0) 00:05:19.172 EAL: Failed to attach device on primary process 00:05:19.172 passed 00:05:19.172 00:05:19.172 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.172 suites 1 1 n/a 0 0 00:05:19.172 tests 1 1 1 0 0 00:05:19.172 asserts 25 25 25 0 n/a 00:05:19.172 00:05:19.172 Elapsed time = 0.034 seconds 00:05:19.172 00:05:19.172 real 0m0.056s 00:05:19.172 user 0m0.014s 00:05:19.172 sys 0m0.042s 00:05:19.172 23:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.172 23:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:19.172 ************************************ 00:05:19.172 END TEST env_pci 00:05:19.172 ************************************ 00:05:19.172 23:04:24 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:19.172 23:04:24 -- env/env.sh@15 -- # uname 00:05:19.172 23:04:24 -- env/env.sh@15 -- # '[' Linux = Linux ']' 
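env.sh assembles the DPDK arguments for the post-init test exactly as traced above: a fixed one-core mask, plus --base-virtaddr on Linux so every process maps DPDK memory at the same virtual base. Condensed into a stand-alone sketch (the binary path is shortened here; the real script resolves it under test/env/env_dpdk_post_init):

  # Illustrative condensation of the env.sh argument build-up seen in the trace.
  argv='-c 0x1 '
  if [[ $(uname) == Linux ]]; then
      # pin the EAL virtual address base so primary/secondary processes agree
      argv+='--base-virtaddr=0x200000000000'
  fi
  ./env_dpdk_post_init $argv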
00:05:19.172 23:04:24 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:19.172 23:04:24 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.172 23:04:24 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:19.172 23:04:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.172 23:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:19.172 ************************************ 00:05:19.172 START TEST env_dpdk_post_init 00:05:19.172 ************************************ 00:05:19.172 23:04:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.172 EAL: Detected CPU lcores: 112 00:05:19.172 EAL: Detected NUMA nodes: 2 00:05:19.172 EAL: Detected shared linkage of DPDK 00:05:19.172 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.172 EAL: Selected IOVA mode 'VA' 00:05:19.172 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.172 EAL: VFIO support initialized 00:05:19.172 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.431 EAL: Using IOMMU type 1 (Type 1) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.431 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:19.431 EAL: Ignore mapping IO port bar(1) 00:05:19.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:19.432 EAL: Ignore mapping IO port bar(1) 00:05:19.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:19.432 EAL: Ignore mapping IO port bar(1) 00:05:19.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:20.369 EAL: Probe PCI driver: 
spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:24.559 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:24.559 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:24.559 Starting DPDK initialization... 00:05:24.559 Starting SPDK post initialization... 00:05:24.559 SPDK NVMe probe 00:05:24.559 Attaching to 0000:d8:00.0 00:05:24.559 Attached to 0000:d8:00.0 00:05:24.560 Cleaning up... 00:05:24.560 00:05:24.560 real 0m5.236s 00:05:24.560 user 0m3.884s 00:05:24.560 sys 0m0.406s 00:05:24.560 23:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.560 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 ************************************ 00:05:24.560 END TEST env_dpdk_post_init 00:05:24.560 ************************************ 00:05:24.560 23:04:30 -- env/env.sh@26 -- # uname 00:05:24.560 23:04:30 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:24.560 23:04:30 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.560 23:04:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.560 23:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.560 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 ************************************ 00:05:24.560 START TEST env_mem_callbacks 00:05:24.560 ************************************ 00:05:24.560 23:04:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.560 EAL: Detected CPU lcores: 112 00:05:24.560 EAL: Detected NUMA nodes: 2 00:05:24.560 EAL: Detected shared linkage of DPDK 00:05:24.560 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.560 EAL: Selected IOVA mode 'VA' 00:05:24.560 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.560 EAL: VFIO support initialized 00:05:24.560 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.560 00:05:24.560 00:05:24.560 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.560 http://cunit.sourceforge.net/ 00:05:24.560 00:05:24.560 00:05:24.560 Suite: memory 00:05:24.560 Test: test ... 
00:05:24.560 register 0x200000200000 2097152 00:05:24.560 malloc 3145728 00:05:24.560 register 0x200000400000 4194304 00:05:24.560 buf 0x200000500000 len 3145728 PASSED 00:05:24.560 malloc 64 00:05:24.560 buf 0x2000004fff40 len 64 PASSED 00:05:24.560 malloc 4194304 00:05:24.560 register 0x200000800000 6291456 00:05:24.560 buf 0x200000a00000 len 4194304 PASSED 00:05:24.560 free 0x200000500000 3145728 00:05:24.560 free 0x2000004fff40 64 00:05:24.560 unregister 0x200000400000 4194304 PASSED 00:05:24.560 free 0x200000a00000 4194304 00:05:24.560 unregister 0x200000800000 6291456 PASSED 00:05:24.560 malloc 8388608 00:05:24.560 register 0x200000400000 10485760 00:05:24.560 buf 0x200000600000 len 8388608 PASSED 00:05:24.560 free 0x200000600000 8388608 00:05:24.560 unregister 0x200000400000 10485760 PASSED 00:05:24.560 passed 00:05:24.560 00:05:24.560 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.560 suites 1 1 n/a 0 0 00:05:24.560 tests 1 1 1 0 0 00:05:24.560 asserts 15 15 15 0 n/a 00:05:24.560 00:05:24.560 Elapsed time = 0.005 seconds 00:05:24.560 00:05:24.560 real 0m0.066s 00:05:24.560 user 0m0.015s 00:05:24.560 sys 0m0.051s 00:05:24.560 23:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.560 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 ************************************ 00:05:24.560 END TEST env_mem_callbacks 00:05:24.560 ************************************ 00:05:24.560 00:05:24.560 real 0m6.941s 00:05:24.560 user 0m4.796s 00:05:24.560 sys 0m1.228s 00:05:24.560 23:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.560 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 ************************************ 00:05:24.560 END TEST env 00:05:24.560 ************************************ 00:05:24.560 23:04:30 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:24.560 23:04:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.560 23:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.560 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.560 ************************************ 00:05:24.560 START TEST rpc 00:05:24.560 ************************************ 00:05:24.560 23:04:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:24.880 * Looking for test storage... 00:05:24.880 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:24.880 23:04:30 -- rpc/rpc.sh@65 -- # spdk_pid=441009 00:05:24.880 23:04:30 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:24.880 23:04:30 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.880 23:04:30 -- rpc/rpc.sh@67 -- # waitforlisten 441009 00:05:24.880 23:04:30 -- common/autotest_common.sh@819 -- # '[' -z 441009 ']' 00:05:24.880 23:04:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.880 23:04:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.880 23:04:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
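rpc.sh follows the same start-and-wait pattern used earlier for opal_revert_cleanup: launch spdk_tgt (here with the bdev tracepoint group enabled via -e bdev), record its pid, and block until the RPC socket answers. A rough stand-in for that handshake is below, polling with a real RPC (rpc_get_methods) rather than the fuller waitforlisten helper, and with paths abbreviated relative to the spdk checkout:

  # Rough, illustrative equivalent of the start-and-wait handshake above.
  ./build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$spdk_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "spdk_tgt ($spdk_pid) is listening on /var/tmp/spdk.sock"

Once that returns, the trap registered by rpc.sh ('killprocess $spdk_pid' on exit) ensures the target is torn down whichever way the tests finish.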
00:05:24.880 23:04:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.880 23:04:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.880 [2024-11-02 23:04:30.407057] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:24.880 [2024-11-02 23:04:30.407114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441009 ] 00:05:24.880 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.880 [2024-11-02 23:04:30.475995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.880 [2024-11-02 23:04:30.549825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.880 [2024-11-02 23:04:30.549944] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:24.880 [2024-11-02 23:04:30.549954] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 441009' to capture a snapshot of events at runtime. 00:05:24.880 [2024-11-02 23:04:30.549963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid441009 for offline analysis/debug. 00:05:24.880 [2024-11-02 23:04:30.549989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.462 23:04:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.462 23:04:31 -- common/autotest_common.sh@852 -- # return 0 00:05:25.462 23:04:31 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:25.462 23:04:31 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:25.462 23:04:31 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:25.462 23:04:31 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:25.462 23:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.462 23:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.462 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.721 ************************************ 00:05:25.721 START TEST rpc_integrity 00:05:25.721 ************************************ 00:05:25.721 23:04:31 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:25.721 23:04:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.721 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.721 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.721 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.721 23:04:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.721 23:04:31 -- rpc/rpc.sh@13 -- # jq length 00:05:25.721 23:04:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.721 23:04:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.721 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.721 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.721 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.721 23:04:31 -- 
rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:25.721 23:04:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.721 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.721 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.721 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.721 23:04:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.721 { 00:05:25.721 "name": "Malloc0", 00:05:25.721 "aliases": [ 00:05:25.721 "8322066c-eb97-4e9e-9dfa-195c4c6b09c4" 00:05:25.721 ], 00:05:25.721 "product_name": "Malloc disk", 00:05:25.721 "block_size": 512, 00:05:25.721 "num_blocks": 16384, 00:05:25.721 "uuid": "8322066c-eb97-4e9e-9dfa-195c4c6b09c4", 00:05:25.721 "assigned_rate_limits": { 00:05:25.721 "rw_ios_per_sec": 0, 00:05:25.721 "rw_mbytes_per_sec": 0, 00:05:25.721 "r_mbytes_per_sec": 0, 00:05:25.721 "w_mbytes_per_sec": 0 00:05:25.721 }, 00:05:25.721 "claimed": false, 00:05:25.721 "zoned": false, 00:05:25.721 "supported_io_types": { 00:05:25.721 "read": true, 00:05:25.721 "write": true, 00:05:25.721 "unmap": true, 00:05:25.721 "write_zeroes": true, 00:05:25.721 "flush": true, 00:05:25.721 "reset": true, 00:05:25.721 "compare": false, 00:05:25.721 "compare_and_write": false, 00:05:25.721 "abort": true, 00:05:25.721 "nvme_admin": false, 00:05:25.721 "nvme_io": false 00:05:25.721 }, 00:05:25.721 "memory_domains": [ 00:05:25.721 { 00:05:25.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.721 "dma_device_type": 2 00:05:25.721 } 00:05:25.721 ], 00:05:25.721 "driver_specific": {} 00:05:25.721 } 00:05:25.721 ]' 00:05:25.721 23:04:31 -- rpc/rpc.sh@17 -- # jq length 00:05:25.721 23:04:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.721 23:04:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:25.721 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.721 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.721 [2024-11-02 23:04:31.353107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:25.721 [2024-11-02 23:04:31.353137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.721 [2024-11-02 23:04:31.353150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aaaf40 00:05:25.721 [2024-11-02 23:04:31.353159] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.721 [2024-11-02 23:04:31.354170] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.721 [2024-11-02 23:04:31.354190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.721 Passthru0 00:05:25.721 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.721 23:04:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.721 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.721 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.721 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.721 23:04:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.721 { 00:05:25.721 "name": "Malloc0", 00:05:25.721 "aliases": [ 00:05:25.721 "8322066c-eb97-4e9e-9dfa-195c4c6b09c4" 00:05:25.721 ], 00:05:25.721 "product_name": "Malloc disk", 00:05:25.721 "block_size": 512, 00:05:25.721 "num_blocks": 16384, 00:05:25.721 "uuid": "8322066c-eb97-4e9e-9dfa-195c4c6b09c4", 00:05:25.721 "assigned_rate_limits": { 00:05:25.721 "rw_ios_per_sec": 0, 00:05:25.721 "rw_mbytes_per_sec": 0, 00:05:25.721 "r_mbytes_per_sec": 0, 00:05:25.721 
"w_mbytes_per_sec": 0 00:05:25.721 }, 00:05:25.721 "claimed": true, 00:05:25.721 "claim_type": "exclusive_write", 00:05:25.721 "zoned": false, 00:05:25.722 "supported_io_types": { 00:05:25.722 "read": true, 00:05:25.722 "write": true, 00:05:25.722 "unmap": true, 00:05:25.722 "write_zeroes": true, 00:05:25.722 "flush": true, 00:05:25.722 "reset": true, 00:05:25.722 "compare": false, 00:05:25.722 "compare_and_write": false, 00:05:25.722 "abort": true, 00:05:25.722 "nvme_admin": false, 00:05:25.722 "nvme_io": false 00:05:25.722 }, 00:05:25.722 "memory_domains": [ 00:05:25.722 { 00:05:25.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.722 "dma_device_type": 2 00:05:25.722 } 00:05:25.722 ], 00:05:25.722 "driver_specific": {} 00:05:25.722 }, 00:05:25.722 { 00:05:25.722 "name": "Passthru0", 00:05:25.722 "aliases": [ 00:05:25.722 "813f0d0e-e16f-504c-9c67-9e39c5cd6fe4" 00:05:25.722 ], 00:05:25.722 "product_name": "passthru", 00:05:25.722 "block_size": 512, 00:05:25.722 "num_blocks": 16384, 00:05:25.722 "uuid": "813f0d0e-e16f-504c-9c67-9e39c5cd6fe4", 00:05:25.722 "assigned_rate_limits": { 00:05:25.722 "rw_ios_per_sec": 0, 00:05:25.722 "rw_mbytes_per_sec": 0, 00:05:25.722 "r_mbytes_per_sec": 0, 00:05:25.722 "w_mbytes_per_sec": 0 00:05:25.722 }, 00:05:25.722 "claimed": false, 00:05:25.722 "zoned": false, 00:05:25.722 "supported_io_types": { 00:05:25.722 "read": true, 00:05:25.722 "write": true, 00:05:25.722 "unmap": true, 00:05:25.722 "write_zeroes": true, 00:05:25.722 "flush": true, 00:05:25.722 "reset": true, 00:05:25.722 "compare": false, 00:05:25.722 "compare_and_write": false, 00:05:25.722 "abort": true, 00:05:25.722 "nvme_admin": false, 00:05:25.722 "nvme_io": false 00:05:25.722 }, 00:05:25.722 "memory_domains": [ 00:05:25.722 { 00:05:25.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.722 "dma_device_type": 2 00:05:25.722 } 00:05:25.722 ], 00:05:25.722 "driver_specific": { 00:05:25.722 "passthru": { 00:05:25.722 "name": "Passthru0", 00:05:25.722 "base_bdev_name": "Malloc0" 00:05:25.722 } 00:05:25.722 } 00:05:25.722 } 00:05:25.722 ]' 00:05:25.722 23:04:31 -- rpc/rpc.sh@21 -- # jq length 00:05:25.722 23:04:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.722 23:04:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.722 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.722 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.722 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.722 23:04:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:25.722 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.722 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.722 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.722 23:04:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.722 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.722 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.722 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.722 23:04:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.722 23:04:31 -- rpc/rpc.sh@26 -- # jq length 00:05:25.981 23:04:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.981 00:05:25.981 real 0m0.276s 00:05:25.981 user 0m0.162s 00:05:25.981 sys 0m0.050s 00:05:25.981 23:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.981 ************************************ 00:05:25.981 END TEST rpc_integrity 
00:05:25.981 ************************************ 00:05:25.981 23:04:31 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:25.981 23:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.981 23:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.981 ************************************ 00:05:25.981 START TEST rpc_plugins 00:05:25.981 ************************************ 00:05:25.981 23:04:31 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:25.981 23:04:31 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:25.981 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.981 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.981 23:04:31 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:25.981 23:04:31 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:25.981 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.981 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.981 23:04:31 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:25.981 { 00:05:25.981 "name": "Malloc1", 00:05:25.981 "aliases": [ 00:05:25.981 "3af43ba0-6e0f-4275-8741-403e3d1d4dcd" 00:05:25.981 ], 00:05:25.981 "product_name": "Malloc disk", 00:05:25.981 "block_size": 4096, 00:05:25.981 "num_blocks": 256, 00:05:25.981 "uuid": "3af43ba0-6e0f-4275-8741-403e3d1d4dcd", 00:05:25.981 "assigned_rate_limits": { 00:05:25.981 "rw_ios_per_sec": 0, 00:05:25.981 "rw_mbytes_per_sec": 0, 00:05:25.981 "r_mbytes_per_sec": 0, 00:05:25.981 "w_mbytes_per_sec": 0 00:05:25.981 }, 00:05:25.981 "claimed": false, 00:05:25.981 "zoned": false, 00:05:25.981 "supported_io_types": { 00:05:25.981 "read": true, 00:05:25.981 "write": true, 00:05:25.981 "unmap": true, 00:05:25.981 "write_zeroes": true, 00:05:25.981 "flush": true, 00:05:25.981 "reset": true, 00:05:25.981 "compare": false, 00:05:25.981 "compare_and_write": false, 00:05:25.981 "abort": true, 00:05:25.981 "nvme_admin": false, 00:05:25.981 "nvme_io": false 00:05:25.981 }, 00:05:25.981 "memory_domains": [ 00:05:25.981 { 00:05:25.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.981 "dma_device_type": 2 00:05:25.981 } 00:05:25.981 ], 00:05:25.981 "driver_specific": {} 00:05:25.981 } 00:05:25.981 ]' 00:05:25.981 23:04:31 -- rpc/rpc.sh@32 -- # jq length 00:05:25.981 23:04:31 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:25.981 23:04:31 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:25.981 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.981 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.981 23:04:31 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:25.981 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.981 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.981 23:04:31 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:25.981 23:04:31 -- rpc/rpc.sh@36 -- # jq length 00:05:25.981 23:04:31 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:25.981 00:05:25.981 real 0m0.132s 00:05:25.981 user 0m0.081s 00:05:25.981 sys 0m0.021s 00:05:25.981 23:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 
00:05:25.981 ************************************ 00:05:25.981 END TEST rpc_plugins 00:05:25.981 ************************************ 00:05:25.981 23:04:31 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:25.981 23:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.981 23:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.981 ************************************ 00:05:25.981 START TEST rpc_trace_cmd_test 00:05:25.981 ************************************ 00:05:25.981 23:04:31 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:25.981 23:04:31 -- rpc/rpc.sh@40 -- # local info 00:05:25.981 23:04:31 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:25.981 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.981 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.240 23:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.240 23:04:31 -- rpc/rpc.sh@42 -- # info='{ 00:05:26.240 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid441009", 00:05:26.240 "tpoint_group_mask": "0x8", 00:05:26.240 "iscsi_conn": { 00:05:26.240 "mask": "0x2", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "scsi": { 00:05:26.240 "mask": "0x4", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "bdev": { 00:05:26.240 "mask": "0x8", 00:05:26.240 "tpoint_mask": "0xffffffffffffffff" 00:05:26.240 }, 00:05:26.240 "nvmf_rdma": { 00:05:26.240 "mask": "0x10", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "nvmf_tcp": { 00:05:26.240 "mask": "0x20", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "ftl": { 00:05:26.240 "mask": "0x40", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "blobfs": { 00:05:26.240 "mask": "0x80", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "dsa": { 00:05:26.240 "mask": "0x200", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "thread": { 00:05:26.240 "mask": "0x400", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "nvme_pcie": { 00:05:26.240 "mask": "0x800", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "iaa": { 00:05:26.240 "mask": "0x1000", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "nvme_tcp": { 00:05:26.240 "mask": "0x2000", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 }, 00:05:26.240 "bdev_nvme": { 00:05:26.240 "mask": "0x4000", 00:05:26.240 "tpoint_mask": "0x0" 00:05:26.240 } 00:05:26.240 }' 00:05:26.240 23:04:31 -- rpc/rpc.sh@43 -- # jq length 00:05:26.240 23:04:31 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:26.240 23:04:31 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.240 23:04:31 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.240 23:04:31 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.240 23:04:31 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.240 23:04:31 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.240 23:04:31 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.240 23:04:31 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.240 23:04:31 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.240 00:05:26.240 real 0m0.227s 00:05:26.240 user 0m0.180s 00:05:26.240 sys 0m0.040s 00:05:26.240 23:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.240 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.240 ************************************ 00:05:26.240 END TEST rpc_trace_cmd_test 
00:05:26.240 ************************************ 00:05:26.240 23:04:31 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.240 23:04:31 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.240 23:04:31 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.240 23:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.240 23:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.240 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 ************************************ 00:05:26.499 START TEST rpc_daemon_integrity 00:05:26.499 ************************************ 00:05:26.499 23:04:31 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:26.499 23:04:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.499 23:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.499 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.499 23:04:32 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.499 23:04:32 -- rpc/rpc.sh@13 -- # jq length 00:05:26.499 23:04:32 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.499 23:04:32 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.499 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.499 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.499 23:04:32 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.499 23:04:32 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.499 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.499 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.499 23:04:32 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.499 { 00:05:26.499 "name": "Malloc2", 00:05:26.499 "aliases": [ 00:05:26.499 "31312e57-27aa-47fc-b8f2-f03ee163e2d0" 00:05:26.499 ], 00:05:26.499 "product_name": "Malloc disk", 00:05:26.499 "block_size": 512, 00:05:26.499 "num_blocks": 16384, 00:05:26.499 "uuid": "31312e57-27aa-47fc-b8f2-f03ee163e2d0", 00:05:26.499 "assigned_rate_limits": { 00:05:26.499 "rw_ios_per_sec": 0, 00:05:26.499 "rw_mbytes_per_sec": 0, 00:05:26.499 "r_mbytes_per_sec": 0, 00:05:26.499 "w_mbytes_per_sec": 0 00:05:26.499 }, 00:05:26.499 "claimed": false, 00:05:26.499 "zoned": false, 00:05:26.499 "supported_io_types": { 00:05:26.499 "read": true, 00:05:26.499 "write": true, 00:05:26.499 "unmap": true, 00:05:26.499 "write_zeroes": true, 00:05:26.499 "flush": true, 00:05:26.499 "reset": true, 00:05:26.499 "compare": false, 00:05:26.499 "compare_and_write": false, 00:05:26.499 "abort": true, 00:05:26.499 "nvme_admin": false, 00:05:26.499 "nvme_io": false 00:05:26.499 }, 00:05:26.499 "memory_domains": [ 00:05:26.499 { 00:05:26.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.499 "dma_device_type": 2 00:05:26.499 } 00:05:26.499 ], 00:05:26.499 "driver_specific": {} 00:05:26.499 } 00:05:26.499 ]' 00:05:26.499 23:04:32 -- rpc/rpc.sh@17 -- # jq length 00:05:26.499 23:04:32 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.499 23:04:32 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:26.499 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.499 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 [2024-11-02 23:04:32.131215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:26.499 [2024-11-02 23:04:32.131243] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.499 [2024-11-02 23:04:32.131257] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aac740 00:05:26.499 [2024-11-02 23:04:32.131265] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.499 [2024-11-02 23:04:32.132168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.499 [2024-11-02 23:04:32.132187] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.499 Passthru0 00:05:26.499 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.499 23:04:32 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.499 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.499 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.499 23:04:32 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.499 { 00:05:26.499 "name": "Malloc2", 00:05:26.499 "aliases": [ 00:05:26.499 "31312e57-27aa-47fc-b8f2-f03ee163e2d0" 00:05:26.499 ], 00:05:26.499 "product_name": "Malloc disk", 00:05:26.499 "block_size": 512, 00:05:26.499 "num_blocks": 16384, 00:05:26.499 "uuid": "31312e57-27aa-47fc-b8f2-f03ee163e2d0", 00:05:26.499 "assigned_rate_limits": { 00:05:26.499 "rw_ios_per_sec": 0, 00:05:26.499 "rw_mbytes_per_sec": 0, 00:05:26.499 "r_mbytes_per_sec": 0, 00:05:26.499 "w_mbytes_per_sec": 0 00:05:26.499 }, 00:05:26.499 "claimed": true, 00:05:26.499 "claim_type": "exclusive_write", 00:05:26.499 "zoned": false, 00:05:26.499 "supported_io_types": { 00:05:26.499 "read": true, 00:05:26.499 "write": true, 00:05:26.499 "unmap": true, 00:05:26.499 "write_zeroes": true, 00:05:26.499 "flush": true, 00:05:26.499 "reset": true, 00:05:26.499 "compare": false, 00:05:26.499 "compare_and_write": false, 00:05:26.499 "abort": true, 00:05:26.499 "nvme_admin": false, 00:05:26.499 "nvme_io": false 00:05:26.499 }, 00:05:26.499 "memory_domains": [ 00:05:26.499 { 00:05:26.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.499 "dma_device_type": 2 00:05:26.499 } 00:05:26.499 ], 00:05:26.499 "driver_specific": {} 00:05:26.499 }, 00:05:26.499 { 00:05:26.499 "name": "Passthru0", 00:05:26.499 "aliases": [ 00:05:26.499 "439286da-f68c-576c-ba9d-005c9016fee6" 00:05:26.499 ], 00:05:26.499 "product_name": "passthru", 00:05:26.499 "block_size": 512, 00:05:26.499 "num_blocks": 16384, 00:05:26.499 "uuid": "439286da-f68c-576c-ba9d-005c9016fee6", 00:05:26.499 "assigned_rate_limits": { 00:05:26.499 "rw_ios_per_sec": 0, 00:05:26.499 "rw_mbytes_per_sec": 0, 00:05:26.499 "r_mbytes_per_sec": 0, 00:05:26.499 "w_mbytes_per_sec": 0 00:05:26.499 }, 00:05:26.499 "claimed": false, 00:05:26.499 "zoned": false, 00:05:26.499 "supported_io_types": { 00:05:26.499 "read": true, 00:05:26.499 "write": true, 00:05:26.499 "unmap": true, 00:05:26.499 "write_zeroes": true, 00:05:26.499 "flush": true, 00:05:26.499 "reset": true, 00:05:26.499 "compare": false, 00:05:26.499 "compare_and_write": false, 00:05:26.499 "abort": true, 00:05:26.499 "nvme_admin": false, 00:05:26.499 "nvme_io": false 00:05:26.499 }, 00:05:26.499 "memory_domains": [ 00:05:26.499 { 00:05:26.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.499 "dma_device_type": 2 00:05:26.499 } 00:05:26.499 ], 00:05:26.499 "driver_specific": { 00:05:26.499 "passthru": { 00:05:26.499 "name": "Passthru0", 00:05:26.499 "base_bdev_name": "Malloc2" 00:05:26.499 } 00:05:26.499 } 00:05:26.499 } 00:05:26.499 ]' 00:05:26.499 23:04:32 -- 
rpc/rpc.sh@21 -- # jq length 00:05:26.499 23:04:32 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.499 23:04:32 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.499 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.499 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.499 23:04:32 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:26.500 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.500 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.500 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.500 23:04:32 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.500 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.500 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.500 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.500 23:04:32 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.500 23:04:32 -- rpc/rpc.sh@26 -- # jq length 00:05:26.759 23:04:32 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.759 00:05:26.759 real 0m0.280s 00:05:26.759 user 0m0.173s 00:05:26.759 sys 0m0.050s 00:05:26.759 23:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.759 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.759 ************************************ 00:05:26.759 END TEST rpc_daemon_integrity 00:05:26.759 ************************************ 00:05:26.759 23:04:32 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:26.759 23:04:32 -- rpc/rpc.sh@84 -- # killprocess 441009 00:05:26.759 23:04:32 -- common/autotest_common.sh@926 -- # '[' -z 441009 ']' 00:05:26.759 23:04:32 -- common/autotest_common.sh@930 -- # kill -0 441009 00:05:26.759 23:04:32 -- common/autotest_common.sh@931 -- # uname 00:05:26.759 23:04:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:26.759 23:04:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 441009 00:05:26.759 23:04:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:26.759 23:04:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:26.759 23:04:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 441009' 00:05:26.759 killing process with pid 441009 00:05:26.759 23:04:32 -- common/autotest_common.sh@945 -- # kill 441009 00:05:26.759 23:04:32 -- common/autotest_common.sh@950 -- # wait 441009 00:05:27.018 00:05:27.018 real 0m2.456s 00:05:27.018 user 0m3.089s 00:05:27.018 sys 0m0.751s 00:05:27.018 23:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.018 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.018 ************************************ 00:05:27.018 END TEST rpc 00:05:27.018 ************************************ 00:05:27.018 23:04:32 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.018 23:04:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.018 23:04:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.018 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.018 ************************************ 00:05:27.018 START TEST rpc_client 00:05:27.018 ************************************ 00:05:27.018 23:04:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.277 * Looking for test storage... 
00:05:27.277 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:27.277 23:04:32 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.277 OK 00:05:27.277 23:04:32 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.277 00:05:27.277 real 0m0.129s 00:05:27.277 user 0m0.043s 00:05:27.277 sys 0m0.095s 00:05:27.277 23:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.277 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.277 ************************************ 00:05:27.277 END TEST rpc_client 00:05:27.277 ************************************ 00:05:27.277 23:04:32 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.277 23:04:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.277 23:04:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.277 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.277 ************************************ 00:05:27.277 START TEST json_config 00:05:27.277 ************************************ 00:05:27.277 23:04:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.277 23:04:33 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.277 23:04:33 -- nvmf/common.sh@7 -- # uname -s 00:05:27.277 23:04:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.277 23:04:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.277 23:04:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.277 23:04:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.277 23:04:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.277 23:04:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.277 23:04:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.277 23:04:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.277 23:04:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.277 23:04:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.277 23:04:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:27.277 23:04:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:27.277 23:04:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.277 23:04:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.277 23:04:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.277 23:04:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:27.277 23:04:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.277 23:04:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.277 23:04:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.277 23:04:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.277 
23:04:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.277 23:04:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.277 23:04:33 -- paths/export.sh@5 -- # export PATH 00:05:27.278 23:04:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.278 23:04:33 -- nvmf/common.sh@46 -- # : 0 00:05:27.278 23:04:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:27.278 23:04:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:27.278 23:04:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:27.278 23:04:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.537 23:04:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.537 23:04:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:27.537 23:04:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:27.537 23:04:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:27.537 23:04:33 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:27.537 23:04:33 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:27.537 23:04:33 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:27.537 23:04:33 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.537 23:04:33 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:27.537 23:04:33 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:27.537 23:04:33 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:27.537 23:04:33 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:27.537 23:04:33 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:27.537 23:04:33 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:27.537 23:04:33 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:27.537 23:04:33 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:27.537 23:04:33 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:27.537 23:04:33 -- json_config/json_config.sh@418 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:05:27.537 23:04:33 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:27.537 INFO: JSON configuration test init 00:05:27.537 23:04:33 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:27.537 23:04:33 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:27.537 23:04:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.537 23:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:27.537 23:04:33 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:27.537 23:04:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.537 23:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:27.537 23:04:33 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:27.537 23:04:33 -- json_config/json_config.sh@98 -- # local app=target 00:05:27.537 23:04:33 -- json_config/json_config.sh@99 -- # shift 00:05:27.537 23:04:33 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:27.537 23:04:33 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:27.537 23:04:33 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:27.537 23:04:33 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:27.537 23:04:33 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:27.537 23:04:33 -- json_config/json_config.sh@111 -- # app_pid[$app]=441721 00:05:27.537 23:04:33 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:27.537 Waiting for target to run... 00:05:27.537 23:04:33 -- json_config/json_config.sh@114 -- # waitforlisten 441721 /var/tmp/spdk_tgt.sock 00:05:27.537 23:04:33 -- common/autotest_common.sh@819 -- # '[' -z 441721 ']' 00:05:27.537 23:04:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.537 23:04:33 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:27.537 23:04:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.537 23:04:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.537 23:04:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.537 23:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:27.537 [2024-11-02 23:04:33.103028] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:27.537 [2024-11-02 23:04:33.103080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441721 ] 00:05:27.538 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.105 [2024-11-02 23:04:33.564648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.105 [2024-11-02 23:04:33.650561] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.105 [2024-11-02 23:04:33.650684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.364 23:04:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.364 23:04:33 -- common/autotest_common.sh@852 -- # return 0 00:05:28.364 23:04:33 -- json_config/json_config.sh@115 -- # echo '' 00:05:28.364 00:05:28.364 23:04:33 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:28.364 23:04:33 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:28.364 23:04:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:28.364 23:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:28.364 23:04:33 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:28.364 23:04:33 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:28.364 23:04:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.364 23:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:28.364 23:04:33 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:28.364 23:04:33 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:28.364 23:04:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:31.654 23:04:37 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:31.654 23:04:37 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:31.654 23:04:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:31.654 23:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:31.654 23:04:37 -- json_config/json_config.sh@48 -- # local ret=0 00:05:31.654 23:04:37 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:31.654 23:04:37 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:31.654 23:04:37 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:31.654 23:04:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:31.654 23:04:37 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:31.654 23:04:37 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:31.654 23:04:37 -- json_config/json_config.sh@51 -- # local get_types 00:05:31.654 23:04:37 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:31.654 23:04:37 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:31.654 23:04:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:31.654 23:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:31.654 23:04:37 -- json_config/json_config.sh@58 -- # return 0 00:05:31.654 23:04:37 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:31.654 23:04:37 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:31.654 23:04:37 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:31.654 23:04:37 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:31.654 23:04:37 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:31.654 23:04:37 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:31.654 23:04:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:31.654 23:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:31.654 23:04:37 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:31.654 23:04:37 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:05:31.654 23:04:37 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:05:31.654 23:04:37 -- json_config/json_config.sh@287 -- # nvmftestinit 00:05:31.654 23:04:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:05:31.654 23:04:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:31.654 23:04:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:05:31.654 23:04:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:05:31.654 23:04:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:05:31.654 23:04:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.654 23:04:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:31.654 23:04:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.654 23:04:37 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:05:31.654 23:04:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:05:31.654 23:04:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:05:31.654 23:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 23:04:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:39.770 23:04:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:05:39.770 23:04:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:05:39.770 23:04:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:05:39.770 23:04:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:05:39.770 23:04:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:05:39.770 23:04:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:05:39.770 23:04:44 -- nvmf/common.sh@294 -- # net_devs=() 00:05:39.770 23:04:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:05:39.770 23:04:44 -- nvmf/common.sh@295 -- # e810=() 00:05:39.770 23:04:44 -- nvmf/common.sh@295 -- # local -ga e810 00:05:39.770 23:04:44 -- nvmf/common.sh@296 -- # x722=() 00:05:39.770 23:04:44 -- nvmf/common.sh@296 -- # local -ga x722 00:05:39.770 23:04:44 -- nvmf/common.sh@297 -- # mlx=() 00:05:39.770 23:04:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:05:39.770 23:04:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:05:39.770 23:04:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:39.770 23:04:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:05:39.770 23:04:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:05:39.770 23:04:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:05:39.770 23:04:44 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:05:39.770 23:04:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:05:39.770 23:04:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:39.770 23:04:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:39.770 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:39.770 23:04:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:39.770 23:04:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:39.770 23:04:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:39.770 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:39.770 23:04:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:39.770 23:04:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:05:39.770 23:04:44 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:39.770 23:04:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.770 23:04:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:39.770 23:04:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.770 23:04:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:39.770 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:39.770 23:04:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.770 23:04:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:39.770 23:04:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.770 23:04:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:39.770 23:04:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.770 23:04:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:39.770 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:39.770 23:04:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.770 23:04:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:05:39.770 23:04:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:05:39.770 23:04:44 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:05:39.770 23:04:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:05:39.770 23:04:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:05:39.770 23:04:44 -- nvmf/common.sh@57 -- # uname 00:05:39.770 23:04:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:05:39.770 23:04:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:05:39.770 23:04:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:05:39.770 23:04:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:05:39.770 23:04:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:05:39.770 23:04:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:05:39.770 23:04:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:05:39.770 23:04:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:05:39.770 23:04:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:05:39.770 23:04:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:39.770 23:04:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:05:39.770 23:04:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:39.770 23:04:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:39.770 23:04:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:39.770 23:04:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:39.770 23:04:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:39.770 23:04:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:39.771 23:04:44 -- nvmf/common.sh@104 -- # continue 2 00:05:39.771 23:04:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:39.771 23:04:44 -- nvmf/common.sh@104 -- # continue 2 00:05:39.771 23:04:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:39.771 23:04:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:05:39.771 23:04:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:39.771 23:04:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:05:39.771 23:04:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:05:39.771 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:39.771 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:39.771 altname enp217s0f0np0 00:05:39.771 altname ens818f0np0 00:05:39.771 inet 192.168.100.8/24 scope global mlx_0_0 00:05:39.771 valid_lft forever preferred_lft forever 00:05:39.771 23:04:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:39.771 23:04:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:05:39.771 
23:04:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:39.771 23:04:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:05:39.771 23:04:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:05:39.771 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:39.771 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:39.771 altname enp217s0f1np1 00:05:39.771 altname ens818f1np1 00:05:39.771 inet 192.168.100.9/24 scope global mlx_0_1 00:05:39.771 valid_lft forever preferred_lft forever 00:05:39.771 23:04:44 -- nvmf/common.sh@410 -- # return 0 00:05:39.771 23:04:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:05:39.771 23:04:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:39.771 23:04:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:05:39.771 23:04:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:05:39.771 23:04:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:39.771 23:04:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:39.771 23:04:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:39.771 23:04:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:39.771 23:04:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:39.771 23:04:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:39.771 23:04:44 -- nvmf/common.sh@104 -- # continue 2 00:05:39.771 23:04:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:39.771 23:04:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:39.771 23:04:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:39.771 23:04:44 -- nvmf/common.sh@104 -- # continue 2 00:05:39.771 23:04:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:39.771 23:04:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:05:39.771 23:04:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:39.771 23:04:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:39.771 23:04:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:05:39.771 23:04:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:39.771 23:04:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:39.771 23:04:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:05:39.771 192.168.100.9' 00:05:39.771 23:04:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 
00:05:39.771 192.168.100.9' 00:05:39.771 23:04:44 -- nvmf/common.sh@445 -- # head -n 1 00:05:39.771 23:04:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:39.771 23:04:44 -- nvmf/common.sh@446 -- # tail -n +2 00:05:39.771 23:04:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:39.771 192.168.100.9' 00:05:39.771 23:04:44 -- nvmf/common.sh@446 -- # head -n 1 00:05:39.771 23:04:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:39.771 23:04:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:05:39.771 23:04:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:39.771 23:04:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:05:39.771 23:04:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:05:39.771 23:04:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:05:39.771 23:04:44 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:05:39.771 23:04:44 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:39.771 23:04:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:39.771 MallocForNvmf0 00:05:39.771 23:04:44 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:39.771 23:04:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:39.771 MallocForNvmf1 00:05:39.771 23:04:44 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:39.771 23:04:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:39.771 [2024-11-02 23:04:44.807375] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:39.771 [2024-11-02 23:04:44.839858] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d87040/0x1d93ce0) succeed. 00:05:39.771 [2024-11-02 23:04:44.852065] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d89230/0x1dd5380) succeed. 
00:05:39.771 23:04:44 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.771 23:04:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.771 23:04:45 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.771 23:04:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.771 23:04:45 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.771 23:04:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.771 23:04:45 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:39.771 23:04:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:40.030 [2024-11-02 23:04:45.545113] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:40.030 23:04:45 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:40.030 23:04:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.030 23:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.030 23:04:45 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:40.030 23:04:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.030 23:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.030 23:04:45 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:40.030 23:04:45 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:40.030 23:04:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:40.289 MallocBdevForConfigChangeCheck 00:05:40.289 23:04:45 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:40.289 23:04:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.289 23:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.289 23:04:45 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:40.289 23:04:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.548 23:04:46 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:40.548 INFO: shutting down applications... 
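At this point the target configuration is complete and has been written out with save_config. Because the trace is dense, the lines below are a hand-written recap of the RPC sequence the json_config test drives against the target, assembled only from the calls visible in the log above; they are an illustrative sketch, not additional captured output. The $SPDK and $RPC shell variables are shorthand introduced here for readability, and writing the saved config to $SPDK/spdk_tgt_config.json is an assumption based on the --json path the relaunch below uses.

    # Recap of the target setup exercised above (illustrative sketch, not captured output)
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0            # namespace backing bdevs
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0                # RDMA transport, options as in the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $RPC save_config > "$SPDK/spdk_tgt_config.json"                # config the relaunch loads via --json (path assumed from the trace)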
00:05:40.548 23:04:46 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:40.548 23:04:46 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:40.548 23:04:46 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:40.548 23:04:46 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:43.082 Calling clear_iscsi_subsystem 00:05:43.082 Calling clear_nvmf_subsystem 00:05:43.082 Calling clear_nbd_subsystem 00:05:43.082 Calling clear_ublk_subsystem 00:05:43.082 Calling clear_vhost_blk_subsystem 00:05:43.082 Calling clear_vhost_scsi_subsystem 00:05:43.082 Calling clear_scheduler_subsystem 00:05:43.082 Calling clear_bdev_subsystem 00:05:43.082 Calling clear_accel_subsystem 00:05:43.082 Calling clear_vmd_subsystem 00:05:43.082 Calling clear_sock_subsystem 00:05:43.082 Calling clear_iobuf_subsystem 00:05:43.082 23:04:48 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:43.082 23:04:48 -- json_config/json_config.sh@396 -- # count=100 00:05:43.082 23:04:48 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:43.082 23:04:48 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.082 23:04:48 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:43.082 23:04:48 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:43.341 23:04:49 -- json_config/json_config.sh@398 -- # break 00:05:43.341 23:04:49 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:43.341 23:04:49 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:43.341 23:04:49 -- json_config/json_config.sh@120 -- # local app=target 00:05:43.341 23:04:49 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:43.341 23:04:49 -- json_config/json_config.sh@124 -- # [[ -n 441721 ]] 00:05:43.341 23:04:49 -- json_config/json_config.sh@127 -- # kill -SIGINT 441721 00:05:43.341 23:04:49 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:43.341 23:04:49 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:43.341 23:04:49 -- json_config/json_config.sh@130 -- # kill -0 441721 00:05:43.341 23:04:49 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:43.910 23:04:49 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:43.910 23:04:49 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:43.910 23:04:49 -- json_config/json_config.sh@130 -- # kill -0 441721 00:05:43.910 23:04:49 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:43.910 23:04:49 -- json_config/json_config.sh@132 -- # break 00:05:43.910 23:04:49 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:43.910 23:04:49 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:43.910 SPDK target shutdown done 00:05:43.910 23:04:49 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:43.910 INFO: relaunching applications... 
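The shutdown traced above follows a simple pattern: clear the live configuration subsystem by subsystem, send the target SIGINT, and poll for its exit instead of blocking. A sketch of that polling loop, using the pid from this run (substitute your own):

  pid=441721
  kill -SIGINT "$pid"
  for i in $(seq 1 30); do
      # kill -0 sends no signal; it only checks whether the process still exists
      kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done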
00:05:43.910 23:04:49 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.910 23:04:49 -- json_config/json_config.sh@98 -- # local app=target 00:05:43.910 23:04:49 -- json_config/json_config.sh@99 -- # shift 00:05:43.910 23:04:49 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:43.910 23:04:49 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:43.910 23:04:49 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:43.910 23:04:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.910 23:04:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.910 23:04:49 -- json_config/json_config.sh@111 -- # app_pid[$app]=446853 00:05:43.910 23:04:49 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:43.910 Waiting for target to run... 00:05:43.910 23:04:49 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.910 23:04:49 -- json_config/json_config.sh@114 -- # waitforlisten 446853 /var/tmp/spdk_tgt.sock 00:05:43.910 23:04:49 -- common/autotest_common.sh@819 -- # '[' -z 446853 ']' 00:05:43.910 23:04:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.910 23:04:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.910 23:04:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.910 23:04:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.910 23:04:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.910 [2024-11-02 23:04:49.618732] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:43.910 [2024-11-02 23:04:49.618789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446853 ] 00:05:43.910 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.169 [2024-11-02 23:04:49.915721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.427 [2024-11-02 23:04:49.977771] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.427 [2024-11-02 23:04:49.977873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.717 [2024-11-02 23:04:53.021044] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2274970/0x2232db0) succeed. 00:05:47.717 [2024-11-02 23:04:53.032091] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2276b60/0x20e11d0) succeed. 
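Relaunching reuses the snapshot taken earlier: spdk_tgt is started with --json pointing at spdk_tgt_config.json and the script then waits for the RPC socket before issuing further commands. A sketch of that step, run from the spdk repo root; the polling loop is a simplified stand-in for the waitforlisten helper seen in the trace:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
  tgt_pid=$!
  # wait until the target answers on its UNIX-domain RPC socket
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done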
00:05:47.717 [2024-11-02 23:04:53.081604] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:48.285 23:04:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.285 23:04:53 -- common/autotest_common.sh@852 -- # return 0 00:05:48.285 23:04:53 -- json_config/json_config.sh@115 -- # echo '' 00:05:48.285 00:05:48.285 23:04:53 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:48.285 23:04:53 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:48.285 INFO: Checking if target configuration is the same... 00:05:48.285 23:04:53 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:48.285 23:04:53 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.285 23:04:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.285 + '[' 2 -ne 2 ']' 00:05:48.285 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:48.285 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:48.285 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:48.285 +++ basename /dev/fd/62 00:05:48.285 ++ mktemp /tmp/62.XXX 00:05:48.285 + tmp_file_1=/tmp/62.Cuq 00:05:48.285 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.285 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.285 + tmp_file_2=/tmp/spdk_tgt_config.json.lXK 00:05:48.285 + ret=0 00:05:48.285 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:48.544 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:48.544 + diff -u /tmp/62.Cuq /tmp/spdk_tgt_config.json.lXK 00:05:48.544 + echo 'INFO: JSON config files are the same' 00:05:48.544 INFO: JSON config files are the same 00:05:48.544 + rm /tmp/62.Cuq /tmp/spdk_tgt_config.json.lXK 00:05:48.544 + exit 0 00:05:48.544 23:04:54 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:48.544 23:04:54 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:48.544 INFO: changing configuration and checking if this can be detected... 00:05:48.544 23:04:54 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:48.544 23:04:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:48.544 23:04:54 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.544 23:04:54 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:48.544 23:04:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.544 + '[' 2 -ne 2 ']' 00:05:48.544 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:48.803 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
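The "same configuration" verdict above comes from normalizing both JSON documents before diffing them, since save_config does not guarantee element order. A sketch of that comparison, assuming config_filter.py reads the JSON on stdin as it is invoked in the trace:

  filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.sorted
  $filter -method sort < spdk_tgt_config.json > /tmp/ref.sorted
  diff -u /tmp/ref.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'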
00:05:48.803 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:48.803 +++ basename /dev/fd/62 00:05:48.803 ++ mktemp /tmp/62.XXX 00:05:48.803 + tmp_file_1=/tmp/62.EVD 00:05:48.803 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.803 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.803 + tmp_file_2=/tmp/spdk_tgt_config.json.9JS 00:05:48.803 + ret=0 00:05:48.803 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:49.063 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:49.063 + diff -u /tmp/62.EVD /tmp/spdk_tgt_config.json.9JS 00:05:49.063 + ret=1 00:05:49.063 + echo '=== Start of file: /tmp/62.EVD ===' 00:05:49.063 + cat /tmp/62.EVD 00:05:49.063 + echo '=== End of file: /tmp/62.EVD ===' 00:05:49.063 + echo '' 00:05:49.063 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9JS ===' 00:05:49.063 + cat /tmp/spdk_tgt_config.json.9JS 00:05:49.063 + echo '=== End of file: /tmp/spdk_tgt_config.json.9JS ===' 00:05:49.063 + echo '' 00:05:49.063 + rm /tmp/62.EVD /tmp/spdk_tgt_config.json.9JS 00:05:49.063 + exit 1 00:05:49.063 23:04:54 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:49.063 INFO: configuration change detected. 00:05:49.063 23:04:54 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:49.063 23:04:54 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:49.063 23:04:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:49.063 23:04:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.063 23:04:54 -- json_config/json_config.sh@360 -- # local ret=0 00:05:49.063 23:04:54 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:49.063 23:04:54 -- json_config/json_config.sh@370 -- # [[ -n 446853 ]] 00:05:49.063 23:04:54 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:49.063 23:04:54 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:49.063 23:04:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:49.063 23:04:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.063 23:04:54 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:49.063 23:04:54 -- json_config/json_config.sh@246 -- # uname -s 00:05:49.063 23:04:54 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:49.063 23:04:54 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:49.063 23:04:54 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:49.063 23:04:54 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:49.063 23:04:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:49.063 23:04:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.063 23:04:54 -- json_config/json_config.sh@376 -- # killprocess 446853 00:05:49.063 23:04:54 -- common/autotest_common.sh@926 -- # '[' -z 446853 ']' 00:05:49.063 23:04:54 -- common/autotest_common.sh@930 -- # kill -0 446853 00:05:49.063 23:04:54 -- common/autotest_common.sh@931 -- # uname 00:05:49.063 23:04:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.063 23:04:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 446853 00:05:49.063 23:04:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.063 23:04:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.063 23:04:54 -- common/autotest_common.sh@944 -- # echo 'killing process 
with pid 446853' 00:05:49.063 killing process with pid 446853 00:05:49.063 23:04:54 -- common/autotest_common.sh@945 -- # kill 446853 00:05:49.063 23:04:54 -- common/autotest_common.sh@950 -- # wait 446853 00:05:51.597 23:04:57 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.597 23:04:57 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:51.597 23:04:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.597 23:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:51.856 23:04:57 -- json_config/json_config.sh@381 -- # return 0 00:05:51.856 23:04:57 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:51.856 INFO: Success 00:05:51.856 23:04:57 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:51.856 23:04:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:05:51.856 23:04:57 -- nvmf/common.sh@116 -- # sync 00:05:51.856 23:04:57 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:05:51.856 23:04:57 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:05:51.856 23:04:57 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:05:51.856 23:04:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:05:51.856 23:04:57 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:05:51.856 00:05:51.856 real 0m24.460s 00:05:51.856 user 0m27.445s 00:05:51.856 sys 0m7.600s 00:05:51.856 23:04:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.856 23:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:51.856 ************************************ 00:05:51.856 END TEST json_config 00:05:51.856 ************************************ 00:05:51.856 23:04:57 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:51.856 23:04:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.856 23:04:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.856 23:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:51.856 ************************************ 00:05:51.856 START TEST json_config_extra_key 00:05:51.856 ************************************ 00:05:51.856 23:04:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:51.856 23:04:57 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.857 23:04:57 -- nvmf/common.sh@7 -- # uname -s 00:05:51.857 23:04:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.857 23:04:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.857 23:04:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.857 23:04:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.857 23:04:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.857 23:04:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.857 23:04:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.857 23:04:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.857 23:04:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.857 23:04:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.857 23:04:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:51.857 23:04:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:51.857 
23:04:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.857 23:04:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.857 23:04:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.857 23:04:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:51.857 23:04:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.857 23:04:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.857 23:04:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.857 23:04:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.857 23:04:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.857 23:04:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.857 23:04:57 -- paths/export.sh@5 -- # export PATH 00:05:51.857 23:04:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.857 23:04:57 -- nvmf/common.sh@46 -- # : 0 00:05:51.857 23:04:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:51.857 23:04:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:51.857 23:04:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:51.857 23:04:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.857 23:04:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.857 23:04:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:51.857 23:04:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:51.857 23:04:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@18 -- # 
app_params=(['target']='-m 0x1 -s 1024') 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:51.857 INFO: launching applications... 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=448326 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:51.857 Waiting for target to run... 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 448326 /var/tmp/spdk_tgt.sock 00:05:51.857 23:04:57 -- common/autotest_common.sh@819 -- # '[' -z 448326 ']' 00:05:51.857 23:04:57 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:51.857 23:04:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.857 23:04:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.857 23:04:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.857 23:04:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.857 23:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:51.857 [2024-11-02 23:04:57.598187] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
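json_config_extra_key.sh tracks its single application ("target") in bash associative arrays, which is what the app_pid/app_socket/app_params/configs_path declarations above set up. A condensed sketch of that bookkeeping, with extra_key.json being the config file named in the trace:

  declare -A app_pid app_socket app_params configs_path
  app_socket[target]=/var/tmp/spdk_tgt.sock
  app_params[target]='-m 0x1 -s 1024'
  configs_path[target]=test/json_config/extra_key.json
  build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" --json "${configs_path[target]}" &
  app_pid[target]=$!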
00:05:51.857 [2024-11-02 23:04:57.598245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448326 ] 00:05:52.116 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.375 [2024-11-02 23:04:57.893361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.375 [2024-11-02 23:04:57.954503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.375 [2024-11-02 23:04:57.954618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.943 23:04:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.943 23:04:58 -- common/autotest_common.sh@852 -- # return 0 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:52.943 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:52.943 INFO: shutting down applications... 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 448326 ]] 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 448326 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@50 -- # kill -0 448326 00:05:52.943 23:04:58 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:53.202 23:04:58 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:53.202 23:04:58 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:53.202 23:04:58 -- json_config/json_config_extra_key.sh@50 -- # kill -0 448326 00:05:53.202 23:04:58 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:53.202 23:04:58 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:53.202 23:04:58 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:53.202 23:04:58 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:53.202 SPDK target shutdown done 00:05:53.202 23:04:58 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:53.202 Success 00:05:53.202 00:05:53.202 real 0m1.469s 00:05:53.202 user 0m1.240s 00:05:53.202 sys 0m0.410s 00:05:53.202 23:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.202 23:04:58 -- common/autotest_common.sh@10 -- # set +x 00:05:53.202 ************************************ 00:05:53.202 END TEST json_config_extra_key 00:05:53.202 ************************************ 00:05:53.202 23:04:58 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.202 23:04:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.202 23:04:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.202 23:04:58 -- common/autotest_common.sh@10 -- # set +x 00:05:53.202 ************************************ 00:05:53.202 START TEST alias_rpc 00:05:53.202 ************************************ 00:05:53.202 23:04:58 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.462 * Looking for test storage... 00:05:53.462 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:53.462 23:04:59 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.462 23:04:59 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=448645 00:05:53.462 23:04:59 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 448645 00:05:53.462 23:04:59 -- common/autotest_common.sh@819 -- # '[' -z 448645 ']' 00:05:53.462 23:04:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.462 23:04:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.462 23:04:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.462 23:04:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.462 23:04:59 -- common/autotest_common.sh@10 -- # set +x 00:05:53.462 23:04:59 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.462 [2024-11-02 23:04:59.104728] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:53.462 [2024-11-02 23:04:59.104783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448645 ] 00:05:53.462 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.462 [2024-11-02 23:04:59.173331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.721 [2024-11-02 23:04:59.246161] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.721 [2024-11-02 23:04:59.246272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.289 23:04:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.289 23:04:59 -- common/autotest_common.sh@852 -- # return 0 00:05:54.289 23:04:59 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:54.547 23:05:00 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 448645 00:05:54.547 23:05:00 -- common/autotest_common.sh@926 -- # '[' -z 448645 ']' 00:05:54.547 23:05:00 -- common/autotest_common.sh@930 -- # kill -0 448645 00:05:54.547 23:05:00 -- common/autotest_common.sh@931 -- # uname 00:05:54.547 23:05:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.547 23:05:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 448645 00:05:54.547 23:05:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.547 23:05:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.547 23:05:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 448645' 00:05:54.547 killing process with pid 448645 00:05:54.547 23:05:00 -- common/autotest_common.sh@945 -- # kill 448645 00:05:54.547 23:05:00 -- common/autotest_common.sh@950 -- # wait 448645 00:05:54.806 00:05:54.806 real 0m1.555s 00:05:54.806 user 0m1.683s 00:05:54.806 sys 0m0.442s 00:05:54.806 23:05:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.806 23:05:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.806 
************************************ 00:05:54.806 END TEST alias_rpc 00:05:54.806 ************************************ 00:05:54.806 23:05:00 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:54.806 23:05:00 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.806 23:05:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.806 23:05:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.806 23:05:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.806 ************************************ 00:05:54.806 START TEST spdkcli_tcp 00:05:54.806 ************************************ 00:05:54.806 23:05:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:55.066 * Looking for test storage... 00:05:55.066 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:55.066 23:05:00 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:55.066 23:05:00 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:55.066 23:05:00 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:55.066 23:05:00 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:55.066 23:05:00 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:55.066 23:05:00 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:55.066 23:05:00 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:55.066 23:05:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:55.066 23:05:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.066 23:05:00 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=448963 00:05:55.066 23:05:00 -- spdkcli/tcp.sh@27 -- # waitforlisten 448963 00:05:55.066 23:05:00 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:55.066 23:05:00 -- common/autotest_common.sh@819 -- # '[' -z 448963 ']' 00:05:55.066 23:05:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.066 23:05:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.066 23:05:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.066 23:05:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.066 23:05:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.066 [2024-11-02 23:05:00.709775] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
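The spdkcli_tcp test that starts here checks that the JSON-RPC server is reachable over TCP as well as over its UNIX socket. In the trace that follows, socat bridges 127.0.0.1:9998 to the target's socket and rpc.py is pointed at the TCP endpoint; a minimal sketch of the same bridge:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # -r is the connection retry count and -t the per-call timeout used by the test
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"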
00:05:55.066 [2024-11-02 23:05:00.709833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448963 ] 00:05:55.066 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.066 [2024-11-02 23:05:00.780699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.325 [2024-11-02 23:05:00.855162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.325 [2024-11-02 23:05:00.855305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.325 [2024-11-02 23:05:00.855308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.893 23:05:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.893 23:05:01 -- common/autotest_common.sh@852 -- # return 0 00:05:55.893 23:05:01 -- spdkcli/tcp.sh@31 -- # socat_pid=449163 00:05:55.893 23:05:01 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:55.893 23:05:01 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:56.154 [ 00:05:56.154 "bdev_malloc_delete", 00:05:56.154 "bdev_malloc_create", 00:05:56.154 "bdev_null_resize", 00:05:56.154 "bdev_null_delete", 00:05:56.154 "bdev_null_create", 00:05:56.154 "bdev_nvme_cuse_unregister", 00:05:56.154 "bdev_nvme_cuse_register", 00:05:56.154 "bdev_opal_new_user", 00:05:56.154 "bdev_opal_set_lock_state", 00:05:56.154 "bdev_opal_delete", 00:05:56.154 "bdev_opal_get_info", 00:05:56.154 "bdev_opal_create", 00:05:56.154 "bdev_nvme_opal_revert", 00:05:56.154 "bdev_nvme_opal_init", 00:05:56.154 "bdev_nvme_send_cmd", 00:05:56.154 "bdev_nvme_get_path_iostat", 00:05:56.154 "bdev_nvme_get_mdns_discovery_info", 00:05:56.154 "bdev_nvme_stop_mdns_discovery", 00:05:56.154 "bdev_nvme_start_mdns_discovery", 00:05:56.154 "bdev_nvme_set_multipath_policy", 00:05:56.154 "bdev_nvme_set_preferred_path", 00:05:56.154 "bdev_nvme_get_io_paths", 00:05:56.154 "bdev_nvme_remove_error_injection", 00:05:56.154 "bdev_nvme_add_error_injection", 00:05:56.154 "bdev_nvme_get_discovery_info", 00:05:56.154 "bdev_nvme_stop_discovery", 00:05:56.154 "bdev_nvme_start_discovery", 00:05:56.154 "bdev_nvme_get_controller_health_info", 00:05:56.154 "bdev_nvme_disable_controller", 00:05:56.154 "bdev_nvme_enable_controller", 00:05:56.154 "bdev_nvme_reset_controller", 00:05:56.154 "bdev_nvme_get_transport_statistics", 00:05:56.154 "bdev_nvme_apply_firmware", 00:05:56.154 "bdev_nvme_detach_controller", 00:05:56.154 "bdev_nvme_get_controllers", 00:05:56.154 "bdev_nvme_attach_controller", 00:05:56.154 "bdev_nvme_set_hotplug", 00:05:56.154 "bdev_nvme_set_options", 00:05:56.154 "bdev_passthru_delete", 00:05:56.154 "bdev_passthru_create", 00:05:56.154 "bdev_lvol_grow_lvstore", 00:05:56.154 "bdev_lvol_get_lvols", 00:05:56.154 "bdev_lvol_get_lvstores", 00:05:56.154 "bdev_lvol_delete", 00:05:56.154 "bdev_lvol_set_read_only", 00:05:56.154 "bdev_lvol_resize", 00:05:56.154 "bdev_lvol_decouple_parent", 00:05:56.154 "bdev_lvol_inflate", 00:05:56.154 "bdev_lvol_rename", 00:05:56.154 "bdev_lvol_clone_bdev", 00:05:56.154 "bdev_lvol_clone", 00:05:56.154 "bdev_lvol_snapshot", 00:05:56.154 "bdev_lvol_create", 00:05:56.154 "bdev_lvol_delete_lvstore", 00:05:56.154 "bdev_lvol_rename_lvstore", 00:05:56.154 "bdev_lvol_create_lvstore", 00:05:56.154 "bdev_raid_set_options", 00:05:56.154 
"bdev_raid_remove_base_bdev", 00:05:56.154 "bdev_raid_add_base_bdev", 00:05:56.154 "bdev_raid_delete", 00:05:56.154 "bdev_raid_create", 00:05:56.154 "bdev_raid_get_bdevs", 00:05:56.154 "bdev_error_inject_error", 00:05:56.154 "bdev_error_delete", 00:05:56.154 "bdev_error_create", 00:05:56.154 "bdev_split_delete", 00:05:56.154 "bdev_split_create", 00:05:56.154 "bdev_delay_delete", 00:05:56.154 "bdev_delay_create", 00:05:56.154 "bdev_delay_update_latency", 00:05:56.154 "bdev_zone_block_delete", 00:05:56.154 "bdev_zone_block_create", 00:05:56.154 "blobfs_create", 00:05:56.154 "blobfs_detect", 00:05:56.154 "blobfs_set_cache_size", 00:05:56.154 "bdev_aio_delete", 00:05:56.154 "bdev_aio_rescan", 00:05:56.154 "bdev_aio_create", 00:05:56.154 "bdev_ftl_set_property", 00:05:56.154 "bdev_ftl_get_properties", 00:05:56.154 "bdev_ftl_get_stats", 00:05:56.154 "bdev_ftl_unmap", 00:05:56.154 "bdev_ftl_unload", 00:05:56.154 "bdev_ftl_delete", 00:05:56.154 "bdev_ftl_load", 00:05:56.154 "bdev_ftl_create", 00:05:56.154 "bdev_virtio_attach_controller", 00:05:56.154 "bdev_virtio_scsi_get_devices", 00:05:56.154 "bdev_virtio_detach_controller", 00:05:56.154 "bdev_virtio_blk_set_hotplug", 00:05:56.154 "bdev_iscsi_delete", 00:05:56.154 "bdev_iscsi_create", 00:05:56.154 "bdev_iscsi_set_options", 00:05:56.154 "accel_error_inject_error", 00:05:56.154 "ioat_scan_accel_module", 00:05:56.154 "dsa_scan_accel_module", 00:05:56.154 "iaa_scan_accel_module", 00:05:56.154 "iscsi_set_options", 00:05:56.154 "iscsi_get_auth_groups", 00:05:56.154 "iscsi_auth_group_remove_secret", 00:05:56.154 "iscsi_auth_group_add_secret", 00:05:56.154 "iscsi_delete_auth_group", 00:05:56.154 "iscsi_create_auth_group", 00:05:56.154 "iscsi_set_discovery_auth", 00:05:56.154 "iscsi_get_options", 00:05:56.154 "iscsi_target_node_request_logout", 00:05:56.154 "iscsi_target_node_set_redirect", 00:05:56.154 "iscsi_target_node_set_auth", 00:05:56.154 "iscsi_target_node_add_lun", 00:05:56.154 "iscsi_get_connections", 00:05:56.154 "iscsi_portal_group_set_auth", 00:05:56.154 "iscsi_start_portal_group", 00:05:56.154 "iscsi_delete_portal_group", 00:05:56.154 "iscsi_create_portal_group", 00:05:56.154 "iscsi_get_portal_groups", 00:05:56.154 "iscsi_delete_target_node", 00:05:56.154 "iscsi_target_node_remove_pg_ig_maps", 00:05:56.154 "iscsi_target_node_add_pg_ig_maps", 00:05:56.154 "iscsi_create_target_node", 00:05:56.154 "iscsi_get_target_nodes", 00:05:56.154 "iscsi_delete_initiator_group", 00:05:56.154 "iscsi_initiator_group_remove_initiators", 00:05:56.154 "iscsi_initiator_group_add_initiators", 00:05:56.154 "iscsi_create_initiator_group", 00:05:56.154 "iscsi_get_initiator_groups", 00:05:56.154 "nvmf_set_crdt", 00:05:56.154 "nvmf_set_config", 00:05:56.154 "nvmf_set_max_subsystems", 00:05:56.154 "nvmf_subsystem_get_listeners", 00:05:56.154 "nvmf_subsystem_get_qpairs", 00:05:56.154 "nvmf_subsystem_get_controllers", 00:05:56.154 "nvmf_get_stats", 00:05:56.154 "nvmf_get_transports", 00:05:56.154 "nvmf_create_transport", 00:05:56.154 "nvmf_get_targets", 00:05:56.154 "nvmf_delete_target", 00:05:56.154 "nvmf_create_target", 00:05:56.154 "nvmf_subsystem_allow_any_host", 00:05:56.154 "nvmf_subsystem_remove_host", 00:05:56.154 "nvmf_subsystem_add_host", 00:05:56.154 "nvmf_subsystem_remove_ns", 00:05:56.154 "nvmf_subsystem_add_ns", 00:05:56.154 "nvmf_subsystem_listener_set_ana_state", 00:05:56.154 "nvmf_discovery_get_referrals", 00:05:56.154 "nvmf_discovery_remove_referral", 00:05:56.154 "nvmf_discovery_add_referral", 00:05:56.154 "nvmf_subsystem_remove_listener", 
00:05:56.154 "nvmf_subsystem_add_listener", 00:05:56.154 "nvmf_delete_subsystem", 00:05:56.154 "nvmf_create_subsystem", 00:05:56.154 "nvmf_get_subsystems", 00:05:56.154 "env_dpdk_get_mem_stats", 00:05:56.154 "nbd_get_disks", 00:05:56.154 "nbd_stop_disk", 00:05:56.154 "nbd_start_disk", 00:05:56.154 "ublk_recover_disk", 00:05:56.154 "ublk_get_disks", 00:05:56.154 "ublk_stop_disk", 00:05:56.154 "ublk_start_disk", 00:05:56.154 "ublk_destroy_target", 00:05:56.154 "ublk_create_target", 00:05:56.154 "virtio_blk_create_transport", 00:05:56.154 "virtio_blk_get_transports", 00:05:56.154 "vhost_controller_set_coalescing", 00:05:56.154 "vhost_get_controllers", 00:05:56.154 "vhost_delete_controller", 00:05:56.154 "vhost_create_blk_controller", 00:05:56.154 "vhost_scsi_controller_remove_target", 00:05:56.154 "vhost_scsi_controller_add_target", 00:05:56.154 "vhost_start_scsi_controller", 00:05:56.154 "vhost_create_scsi_controller", 00:05:56.154 "thread_set_cpumask", 00:05:56.154 "framework_get_scheduler", 00:05:56.154 "framework_set_scheduler", 00:05:56.154 "framework_get_reactors", 00:05:56.154 "thread_get_io_channels", 00:05:56.154 "thread_get_pollers", 00:05:56.154 "thread_get_stats", 00:05:56.154 "framework_monitor_context_switch", 00:05:56.154 "spdk_kill_instance", 00:05:56.154 "log_enable_timestamps", 00:05:56.154 "log_get_flags", 00:05:56.154 "log_clear_flag", 00:05:56.154 "log_set_flag", 00:05:56.154 "log_get_level", 00:05:56.154 "log_set_level", 00:05:56.154 "log_get_print_level", 00:05:56.154 "log_set_print_level", 00:05:56.154 "framework_enable_cpumask_locks", 00:05:56.154 "framework_disable_cpumask_locks", 00:05:56.154 "framework_wait_init", 00:05:56.154 "framework_start_init", 00:05:56.154 "scsi_get_devices", 00:05:56.154 "bdev_get_histogram", 00:05:56.154 "bdev_enable_histogram", 00:05:56.154 "bdev_set_qos_limit", 00:05:56.154 "bdev_set_qd_sampling_period", 00:05:56.154 "bdev_get_bdevs", 00:05:56.154 "bdev_reset_iostat", 00:05:56.154 "bdev_get_iostat", 00:05:56.154 "bdev_examine", 00:05:56.154 "bdev_wait_for_examine", 00:05:56.154 "bdev_set_options", 00:05:56.154 "notify_get_notifications", 00:05:56.154 "notify_get_types", 00:05:56.154 "accel_get_stats", 00:05:56.154 "accel_set_options", 00:05:56.154 "accel_set_driver", 00:05:56.154 "accel_crypto_key_destroy", 00:05:56.154 "accel_crypto_keys_get", 00:05:56.154 "accel_crypto_key_create", 00:05:56.154 "accel_assign_opc", 00:05:56.154 "accel_get_module_info", 00:05:56.154 "accel_get_opc_assignments", 00:05:56.154 "vmd_rescan", 00:05:56.154 "vmd_remove_device", 00:05:56.154 "vmd_enable", 00:05:56.154 "sock_set_default_impl", 00:05:56.154 "sock_impl_set_options", 00:05:56.154 "sock_impl_get_options", 00:05:56.154 "iobuf_get_stats", 00:05:56.154 "iobuf_set_options", 00:05:56.154 "framework_get_pci_devices", 00:05:56.155 "framework_get_config", 00:05:56.155 "framework_get_subsystems", 00:05:56.155 "trace_get_info", 00:05:56.155 "trace_get_tpoint_group_mask", 00:05:56.155 "trace_disable_tpoint_group", 00:05:56.155 "trace_enable_tpoint_group", 00:05:56.155 "trace_clear_tpoint_mask", 00:05:56.155 "trace_set_tpoint_mask", 00:05:56.155 "spdk_get_version", 00:05:56.155 "rpc_get_methods" 00:05:56.155 ] 00:05:56.155 23:05:01 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:56.155 23:05:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:56.155 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.155 23:05:01 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:56.155 23:05:01 -- spdkcli/tcp.sh@38 -- # killprocess 
448963 00:05:56.155 23:05:01 -- common/autotest_common.sh@926 -- # '[' -z 448963 ']' 00:05:56.155 23:05:01 -- common/autotest_common.sh@930 -- # kill -0 448963 00:05:56.155 23:05:01 -- common/autotest_common.sh@931 -- # uname 00:05:56.155 23:05:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.155 23:05:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 448963 00:05:56.155 23:05:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.155 23:05:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.155 23:05:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 448963' 00:05:56.155 killing process with pid 448963 00:05:56.155 23:05:01 -- common/autotest_common.sh@945 -- # kill 448963 00:05:56.155 23:05:01 -- common/autotest_common.sh@950 -- # wait 448963 00:05:56.414 00:05:56.414 real 0m1.603s 00:05:56.414 user 0m2.973s 00:05:56.414 sys 0m0.490s 00:05:56.414 23:05:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.414 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.414 ************************************ 00:05:56.414 END TEST spdkcli_tcp 00:05:56.414 ************************************ 00:05:56.673 23:05:02 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.673 23:05:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.673 23:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.673 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.673 ************************************ 00:05:56.673 START TEST dpdk_mem_utility 00:05:56.673 ************************************ 00:05:56.673 23:05:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.673 * Looking for test storage... 00:05:56.673 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:56.673 23:05:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:56.673 23:05:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=449305 00:05:56.673 23:05:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 449305 00:05:56.673 23:05:02 -- common/autotest_common.sh@819 -- # '[' -z 449305 ']' 00:05:56.673 23:05:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.673 23:05:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.673 23:05:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.673 23:05:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.673 23:05:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.673 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.673 [2024-11-02 23:05:02.341005] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
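test_dpdk_mem_info.sh exercises the target's memory introspection: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK allocator state to a dump file (the reply below names /tmp/spdk_mem_dump.txt), and dpdk_mem_info.py then summarizes that dump. A sketch of the flow, assuming the script picks up that default dump path:

  scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                  # heap, mempool and memzone summary
  scripts/dpdk_mem_info.py -m 0             # per-element detail, argument as used by the test below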
00:05:56.673 [2024-11-02 23:05:02.341058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449305 ] 00:05:56.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.673 [2024-11-02 23:05:02.410251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.933 [2024-11-02 23:05:02.484360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.933 [2024-11-02 23:05:02.484476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.500 23:05:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.500 23:05:03 -- common/autotest_common.sh@852 -- # return 0 00:05:57.500 23:05:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:57.500 23:05:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:57.501 23:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.501 23:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.501 { 00:05:57.501 "filename": "/tmp/spdk_mem_dump.txt" 00:05:57.501 } 00:05:57.501 23:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.501 23:05:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:57.501 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:57.501 1 heaps totaling size 814.000000 MiB 00:05:57.501 size: 814.000000 MiB heap id: 0 00:05:57.501 end heaps---------- 00:05:57.501 8 mempools totaling size 598.116089 MiB 00:05:57.501 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:57.501 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:57.501 size: 84.521057 MiB name: bdev_io_449305 00:05:57.501 size: 51.011292 MiB name: evtpool_449305 00:05:57.501 size: 50.003479 MiB name: msgpool_449305 00:05:57.501 size: 21.763794 MiB name: PDU_Pool 00:05:57.501 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:57.501 size: 0.026123 MiB name: Session_Pool 00:05:57.501 end mempools------- 00:05:57.501 6 memzones totaling size 4.142822 MiB 00:05:57.501 size: 1.000366 MiB name: RG_ring_0_449305 00:05:57.501 size: 1.000366 MiB name: RG_ring_1_449305 00:05:57.501 size: 1.000366 MiB name: RG_ring_4_449305 00:05:57.501 size: 1.000366 MiB name: RG_ring_5_449305 00:05:57.501 size: 0.125366 MiB name: RG_ring_2_449305 00:05:57.501 size: 0.015991 MiB name: RG_ring_3_449305 00:05:57.501 end memzones------- 00:05:57.501 23:05:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:57.760 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:57.760 list of free elements. 
size: 12.519348 MiB 00:05:57.760 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:57.760 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:57.760 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:57.760 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:57.760 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:57.760 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:57.760 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:57.760 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:57.760 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:57.760 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:57.760 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:57.760 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:57.760 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:57.760 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:57.760 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:57.760 list of standard malloc elements. size: 199.218079 MiB 00:05:57.760 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:57.760 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:57.760 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:57.760 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:57.760 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:57.760 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:57.760 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:57.760 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:57.760 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:57.760 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:57.760 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:57.760 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:57.760 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:57.760 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:57.760 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:57.760 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:57.760 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:57.760 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:57.760 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:57.760 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:57.760 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:57.760 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:57.760 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:57.760 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:57.760 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:57.761 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:57.761 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:57.761 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:57.761 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:57.761 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:57.761 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:57.761 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:57.761 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:57.761 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:57.761 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:57.761 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:57.761 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:57.761 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:57.761 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:57.761 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:57.761 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:57.761 list of memzone associated elements. size: 602.262573 MiB 00:05:57.761 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:57.761 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:57.761 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:57.761 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:57.761 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:57.761 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_449305_0 00:05:57.761 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:57.761 associated memzone info: size: 48.002930 MiB name: MP_evtpool_449305_0 00:05:57.761 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:57.761 associated memzone info: size: 48.002930 MiB name: MP_msgpool_449305_0 00:05:57.761 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:57.761 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:57.761 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:57.761 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:57.761 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:57.761 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_449305 00:05:57.761 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:57.761 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_449305 00:05:57.761 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:57.761 associated memzone info: size: 1.007996 MiB name: MP_evtpool_449305 00:05:57.761 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:57.761 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:57.761 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:57.761 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:57.761 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:57.761 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:57.761 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:57.761 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:57.761 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:57.761 associated memzone info: size: 1.000366 MiB name: RG_ring_0_449305 00:05:57.761 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:57.761 associated memzone info: size: 1.000366 MiB name: RG_ring_1_449305 00:05:57.761 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:57.761 associated memzone info: size: 1.000366 MiB name: RG_ring_4_449305 00:05:57.761 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:57.761 associated memzone info: size: 1.000366 MiB name: RG_ring_5_449305 00:05:57.761 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:57.761 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_449305 00:05:57.761 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:57.761 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:57.761 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:57.761 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:57.761 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:57.761 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:57.761 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:57.761 associated memzone info: size: 0.125366 MiB name: RG_ring_2_449305 00:05:57.761 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:57.761 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:57.761 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:57.761 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:57.761 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:57.761 associated memzone info: size: 0.015991 MiB name: RG_ring_3_449305 00:05:57.761 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:57.761 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:57.761 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:57.761 associated memzone info: size: 0.000183 MiB name: MP_msgpool_449305 00:05:57.761 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:57.761 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_449305 00:05:57.761 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:57.761 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:57.761 23:05:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:57.761 23:05:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 449305 00:05:57.761 23:05:03 -- common/autotest_common.sh@926 -- # '[' -z 449305 ']' 00:05:57.761 23:05:03 -- common/autotest_common.sh@930 -- # kill -0 449305 00:05:57.761 23:05:03 -- common/autotest_common.sh@931 -- # uname 00:05:57.761 23:05:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.761 23:05:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 449305 00:05:57.761 23:05:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:57.761 23:05:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.761 23:05:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 449305' 00:05:57.761 killing process with pid 449305 00:05:57.761 23:05:03 -- common/autotest_common.sh@945 -- # kill 449305 00:05:57.761 23:05:03 -- common/autotest_common.sh@950 -- # wait 449305 00:05:58.021 00:05:58.021 real 0m1.458s 00:05:58.021 user 0m1.549s 00:05:58.021 sys 0m0.417s 00:05:58.021 23:05:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.021 23:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.021 ************************************ 00:05:58.021 END TEST dpdk_mem_utility 00:05:58.021 ************************************ 00:05:58.021 23:05:03 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:58.021 23:05:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.021 23:05:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.021 23:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.021 
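For reference, the per-element figures in a dump like the one above can be cross-checked against the reported totals ("list of standard malloc elements. size: ...", "list of memzone associated elements. size: ...") with a short shell filter, once the dump text has been saved to a file. A minimal sketch, assuming a hypothetical saved copy at /tmp/spdk_mem_dump.txt (not a path produced by this job):

    # Sum every "with size: N MiB" field from a saved copy of the dump above.
    # /tmp/spdk_mem_dump.txt is a placeholder path used only for illustration.
    grep -o 'with size: [0-9.]* MiB' /tmp/spdk_mem_dump.txt \
      | awk '{ total += $3 } END { printf "summed element sizes: %f MiB\n", total }'
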
************************************ 00:05:58.021 START TEST event 00:05:58.021 ************************************ 00:05:58.021 23:05:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:58.331 * Looking for test storage... 00:05:58.331 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:58.331 23:05:03 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:58.331 23:05:03 -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.331 23:05:03 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.331 23:05:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:58.331 23:05:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.331 23:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.331 ************************************ 00:05:58.331 START TEST event_perf 00:05:58.331 ************************************ 00:05:58.331 23:05:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.331 Running I/O for 1 seconds...[2024-11-02 23:05:03.833306] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:58.331 [2024-11-02 23:05:03.833392] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449626 ] 00:05:58.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.331 [2024-11-02 23:05:03.904368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.331 [2024-11-02 23:05:03.975305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.331 [2024-11-02 23:05:03.975411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.331 [2024-11-02 23:05:03.975520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.331 [2024-11-02 23:05:03.975529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.300 Running I/O for 1 seconds... 00:05:59.300 lcore 0: 212698 00:05:59.300 lcore 1: 212696 00:05:59.300 lcore 2: 212697 00:05:59.300 lcore 3: 212698 00:05:59.300 done. 00:05:59.559 00:05:59.559 real 0m1.248s 00:05:59.559 user 0m4.164s 00:05:59.559 sys 0m0.081s 00:05:59.559 23:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.559 23:05:05 -- common/autotest_common.sh@10 -- # set +x 00:05:59.559 ************************************ 00:05:59.559 END TEST event_perf 00:05:59.559 ************************************ 00:05:59.559 23:05:05 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:59.559 23:05:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:59.559 23:05:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.559 23:05:05 -- common/autotest_common.sh@10 -- # set +x 00:05:59.559 ************************************ 00:05:59.559 START TEST event_reactor 00:05:59.559 ************************************ 00:05:59.559 23:05:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:59.559 [2024-11-02 23:05:05.132742] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:59.559 [2024-11-02 23:05:05.132828] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449923 ] 00:05:59.559 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.559 [2024-11-02 23:05:05.206378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.559 [2024-11-02 23:05:05.271585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.937 test_start 00:06:00.937 oneshot 00:06:00.937 tick 100 00:06:00.937 tick 100 00:06:00.937 tick 250 00:06:00.937 tick 100 00:06:00.937 tick 100 00:06:00.937 tick 100 00:06:00.937 tick 250 00:06:00.937 tick 500 00:06:00.937 tick 100 00:06:00.937 tick 100 00:06:00.937 tick 250 00:06:00.937 tick 100 00:06:00.937 tick 100 00:06:00.937 test_end 00:06:00.937 00:06:00.937 real 0m1.247s 00:06:00.937 user 0m1.154s 00:06:00.937 sys 0m0.089s 00:06:00.937 23:05:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.937 23:05:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.937 ************************************ 00:06:00.937 END TEST event_reactor 00:06:00.937 ************************************ 00:06:00.937 23:05:06 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.937 23:05:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:00.937 23:05:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.937 23:05:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.937 ************************************ 00:06:00.937 START TEST event_reactor_perf 00:06:00.937 ************************************ 00:06:00.937 23:05:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.937 [2024-11-02 23:05:06.428478] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:00.937 [2024-11-02 23:05:06.428571] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450209 ] 00:06:00.937 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.937 [2024-11-02 23:05:06.500870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.937 [2024-11-02 23:05:06.566576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.315 test_start 00:06:02.315 test_end 00:06:02.315 Performance: 522972 events per second 00:06:02.315 00:06:02.315 real 0m1.248s 00:06:02.315 user 0m1.158s 00:06:02.315 sys 0m0.087s 00:06:02.315 23:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.315 23:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 ************************************ 00:06:02.315 END TEST event_reactor_perf 00:06:02.315 ************************************ 00:06:02.315 23:05:07 -- event/event.sh@49 -- # uname -s 00:06:02.315 23:05:07 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:02.315 23:05:07 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:02.315 23:05:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.315 23:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.315 23:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 ************************************ 00:06:02.315 START TEST event_scheduler 00:06:02.315 ************************************ 00:06:02.315 23:05:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:02.315 * Looking for test storage... 00:06:02.315 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:02.315 23:05:07 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:02.315 23:05:07 -- scheduler/scheduler.sh@35 -- # scheduler_pid=450519 00:06:02.315 23:05:07 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.315 23:05:07 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:02.315 23:05:07 -- scheduler/scheduler.sh@37 -- # waitforlisten 450519 00:06:02.315 23:05:07 -- common/autotest_common.sh@819 -- # '[' -z 450519 ']' 00:06:02.315 23:05:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.315 23:05:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.315 23:05:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.315 23:05:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.315 23:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 [2024-11-02 23:05:07.854149] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:02.315 [2024-11-02 23:05:07.854204] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450519 ] 00:06:02.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.315 [2024-11-02 23:05:07.921459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.315 [2024-11-02 23:05:07.995879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.315 [2024-11-02 23:05:07.995900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.315 [2024-11-02 23:05:07.995988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.315 [2024-11-02 23:05:07.995991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.252 23:05:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.252 23:05:08 -- common/autotest_common.sh@852 -- # return 0 00:06:03.252 23:05:08 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:03.252 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.252 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.252 POWER: Env isn't set yet! 00:06:03.252 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:03.252 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.252 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.252 POWER: Attempting to initialise PSTAT power management... 00:06:03.252 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:03.252 POWER: Initialized successfully for lcore 0 power management 00:06:03.252 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:03.252 POWER: Initialized successfully for lcore 1 power management 00:06:03.252 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:03.252 POWER: Initialized successfully for lcore 2 power management 00:06:03.252 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:03.252 POWER: Initialized successfully for lcore 3 power management 00:06:03.252 [2024-11-02 23:05:08.712211] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:03.252 [2024-11-02 23:05:08.712226] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:03.252 [2024-11-02 23:05:08.712237] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:03.252 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.252 23:05:08 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:03.252 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.252 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.252 [2024-11-02 23:05:08.779738] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
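The trace above selects the dynamic scheduler and only then finishes framework initialization, which works because the scheduler test app was started with --wait-for-rpc. Outside the harness the same two RPCs can be issued by hand against a target in that state; a minimal sketch using only the calls visible in the trace (default RPC socket assumed):

    # Order matters: the scheduler must be chosen while the app is still waiting
    # in --wait-for-rpc, before subsystem initialization completes.
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
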
00:06:03.252 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.252 23:05:08 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:03.252 23:05:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.252 23:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.252 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.252 ************************************ 00:06:03.252 START TEST scheduler_create_thread 00:06:03.252 ************************************ 00:06:03.252 23:05:08 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:03.252 23:05:08 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:03.252 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.252 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.252 2 00:06:03.252 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.252 23:05:08 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:03.252 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.252 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.252 3 00:06:03.252 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.252 23:05:08 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.253 4 00:06:03.253 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.253 5 00:06:03.253 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.253 6 00:06:03.253 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.253 7 00:06:03.253 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.253 8 00:06:03.253 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.253 9 00:06:03.253 
23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.253 10 00:06:03.253 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.253 23:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:03.253 23:05:08 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:03.253 23:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.253 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:04.189 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.189 23:05:09 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:04.189 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.189 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.565 23:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.565 23:05:11 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:05.565 23:05:11 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:05.565 23:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.566 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:06:06.502 23:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:06.502 00:06:06.502 real 0m3.382s 00:06:06.502 user 0m0.022s 00:06:06.502 sys 0m0.008s 00:06:06.502 23:05:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.502 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:06.502 ************************************ 00:06:06.502 END TEST scheduler_create_thread 00:06:06.502 ************************************ 00:06:06.502 23:05:12 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:06.502 23:05:12 -- scheduler/scheduler.sh@46 -- # killprocess 450519 00:06:06.502 23:05:12 -- common/autotest_common.sh@926 -- # '[' -z 450519 ']' 00:06:06.502 23:05:12 -- common/autotest_common.sh@930 -- # kill -0 450519 00:06:06.502 23:05:12 -- common/autotest_common.sh@931 -- # uname 00:06:06.502 23:05:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:06.502 23:05:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 450519 00:06:06.761 23:05:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:06.761 23:05:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:06.761 23:05:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 450519' 00:06:06.761 killing process with pid 450519 00:06:06.761 23:05:12 -- common/autotest_common.sh@945 -- # kill 450519 00:06:06.761 23:05:12 -- common/autotest_common.sh@950 -- # wait 450519 00:06:07.020 [2024-11-02 23:05:12.551747] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
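The create/activate/delete cycle exercised above is driven through the test app's scheduler_plugin RPCs, so it can be reproduced by hand against the same scheduler test binary. A hedged sketch reusing the exact calls and flags from the trace; the thread ids (11, 12) seen above are whatever scheduler_thread_create returns at runtime, and rpc.py must be able to import scheduler_plugin from the test directory:

    # Create a thread pinned to core 0 at 50% activity, raise it to 80%, then delete it.
    # These RPCs exist only in the scheduler test app, not in a stock spdk_tgt.
    id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n demo_thread -m 0x1 -a 50)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$id" 80
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$id"
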
00:06:07.020 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:07.020 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:07.020 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:07.020 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:07.020 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:07.020 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:07.020 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:07.020 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:07.279 00:06:07.279 real 0m5.093s 00:06:07.279 user 0m10.456s 00:06:07.279 sys 0m0.413s 00:06:07.279 23:05:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.279 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.279 ************************************ 00:06:07.279 END TEST event_scheduler 00:06:07.279 ************************************ 00:06:07.279 23:05:12 -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.279 23:05:12 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.279 23:05:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.279 23:05:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.279 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.279 ************************************ 00:06:07.279 START TEST app_repeat 00:06:07.279 ************************************ 00:06:07.279 23:05:12 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:07.279 23:05:12 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.279 23:05:12 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.279 23:05:12 -- event/event.sh@13 -- # local nbd_list 00:06:07.279 23:05:12 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.279 23:05:12 -- event/event.sh@14 -- # local bdev_list 00:06:07.279 23:05:12 -- event/event.sh@15 -- # local repeat_times=4 00:06:07.279 23:05:12 -- event/event.sh@17 -- # modprobe nbd 00:06:07.279 23:05:12 -- event/event.sh@19 -- # repeat_pid=451382 00:06:07.279 23:05:12 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.279 23:05:12 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.279 23:05:12 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 451382' 00:06:07.279 Process app_repeat pid: 451382 00:06:07.279 23:05:12 -- event/event.sh@23 -- # for i in {0..2} 00:06:07.279 23:05:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.279 spdk_app_start Round 0 00:06:07.279 23:05:12 -- event/event.sh@25 -- # waitforlisten 451382 /var/tmp/spdk-nbd.sock 00:06:07.279 23:05:12 -- common/autotest_common.sh@819 -- # '[' -z 451382 ']' 00:06:07.279 23:05:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.279 23:05:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.279 23:05:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:07.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.279 23:05:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.279 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.279 [2024-11-02 23:05:12.896896] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:07.279 [2024-11-02 23:05:12.896977] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451382 ] 00:06:07.279 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.279 [2024-11-02 23:05:12.967212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.279 [2024-11-02 23:05:13.032483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.279 [2024-11-02 23:05:13.032485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.217 23:05:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.217 23:05:13 -- common/autotest_common.sh@852 -- # return 0 00:06:08.217 23:05:13 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.217 Malloc0 00:06:08.217 23:05:13 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.476 Malloc1 00:06:08.476 23:05:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.476 23:05:14 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.735 /dev/nbd0 00:06:08.735 23:05:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.735 23:05:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.735 23:05:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:08.735 23:05:14 -- common/autotest_common.sh@857 -- # local i 00:06:08.735 23:05:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.735 23:05:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.735 23:05:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:08.735 23:05:14 -- common/autotest_common.sh@861 -- # 
break 00:06:08.735 23:05:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.735 23:05:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.735 23:05:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.735 1+0 records in 00:06:08.735 1+0 records out 00:06:08.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241653 s, 16.9 MB/s 00:06:08.735 23:05:14 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:08.735 23:05:14 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.735 23:05:14 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:08.735 23:05:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.735 23:05:14 -- common/autotest_common.sh@877 -- # return 0 00:06:08.735 23:05:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.735 23:05:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.735 23:05:14 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.994 /dev/nbd1 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.994 23:05:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:08.994 23:05:14 -- common/autotest_common.sh@857 -- # local i 00:06:08.994 23:05:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.994 23:05:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.994 23:05:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:08.994 23:05:14 -- common/autotest_common.sh@861 -- # break 00:06:08.994 23:05:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.994 23:05:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.994 23:05:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.994 1+0 records in 00:06:08.994 1+0 records out 00:06:08.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267964 s, 15.3 MB/s 00:06:08.994 23:05:14 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:08.994 23:05:14 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.994 23:05:14 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:08.994 23:05:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.994 23:05:14 -- common/autotest_common.sh@877 -- # return 0 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.994 { 00:06:08.994 "nbd_device": "/dev/nbd0", 00:06:08.994 "bdev_name": "Malloc0" 00:06:08.994 }, 00:06:08.994 { 00:06:08.994 "nbd_device": "/dev/nbd1", 00:06:08.994 "bdev_name": "Malloc1" 00:06:08.994 } 00:06:08.994 ]' 
00:06:08.994 23:05:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.994 { 00:06:08.994 "nbd_device": "/dev/nbd0", 00:06:08.994 "bdev_name": "Malloc0" 00:06:08.994 }, 00:06:08.994 { 00:06:08.994 "nbd_device": "/dev/nbd1", 00:06:08.994 "bdev_name": "Malloc1" 00:06:08.994 } 00:06:08.994 ]' 00:06:08.994 23:05:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.253 /dev/nbd1' 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.253 /dev/nbd1' 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.253 256+0 records in 00:06:09.253 256+0 records out 00:06:09.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107423 s, 97.6 MB/s 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.253 23:05:14 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.253 256+0 records in 00:06:09.254 256+0 records out 00:06:09.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192066 s, 54.6 MB/s 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.254 256+0 records in 00:06:09.254 256+0 records out 00:06:09.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201729 s, 52.0 MB/s 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
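The nbd_dd_data_verify trace above reduces to a plain write-and-compare round trip against the NBD-exported malloc bdevs: fill a temp file with random data, dd it onto each /dev/nbdX, then read it back with cmp. A condensed sketch of that flow (the temp-file path below is a stand-in for the test's nbdrandtest file, shown only for illustration):

    # Write a 1 MiB random pattern to each exported device and verify it byte-for-byte.
    tmp=/var/tmp/nbdrandtest            # stand-in for the test's nbdrandtest temp file
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern
        cmp -b -n 1M "$tmp" "$nbd"                              # read back and compare
    done
    rm "$tmp"
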
00:06:09.254 23:05:14 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.254 23:05:14 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@41 -- # break 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@41 -- # break 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.513 23:05:15 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@65 -- # true 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.771 23:05:15 -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.771 23:05:15 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.030 23:05:15 -- event/event.sh@35 -- # sleep 3 00:06:10.289 [2024-11-02 23:05:15.890423] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:10.289 [2024-11-02 23:05:15.950878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.289 [2024-11-02 23:05:15.950880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.289 [2024-11-02 23:05:15.991996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.289 [2024-11-02 23:05:15.992041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.578 23:05:18 -- event/event.sh@23 -- # for i in {0..2} 00:06:13.578 23:05:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:13.578 spdk_app_start Round 1 00:06:13.578 23:05:18 -- event/event.sh@25 -- # waitforlisten 451382 /var/tmp/spdk-nbd.sock 00:06:13.578 23:05:18 -- common/autotest_common.sh@819 -- # '[' -z 451382 ']' 00:06:13.578 23:05:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.578 23:05:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.578 23:05:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.578 23:05:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.578 23:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.578 23:05:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.578 23:05:18 -- common/autotest_common.sh@852 -- # return 0 00:06:13.578 23:05:18 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.578 Malloc0 00:06:13.578 23:05:19 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.578 Malloc1 00:06:13.578 23:05:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@12 -- # local i 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.578 23:05:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.837 /dev/nbd0 00:06:13.837 23:05:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.837 23:05:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.837 
23:05:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:13.837 23:05:19 -- common/autotest_common.sh@857 -- # local i 00:06:13.837 23:05:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:13.837 23:05:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:13.837 23:05:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:13.837 23:05:19 -- common/autotest_common.sh@861 -- # break 00:06:13.837 23:05:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:13.837 23:05:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:13.837 23:05:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.837 1+0 records in 00:06:13.837 1+0 records out 00:06:13.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214386 s, 19.1 MB/s 00:06:13.837 23:05:19 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:13.837 23:05:19 -- common/autotest_common.sh@874 -- # size=4096 00:06:13.837 23:05:19 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:13.837 23:05:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:13.837 23:05:19 -- common/autotest_common.sh@877 -- # return 0 00:06:13.837 23:05:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.837 23:05:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.837 23:05:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.095 /dev/nbd1 00:06:14.095 23:05:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.095 23:05:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.095 23:05:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:14.095 23:05:19 -- common/autotest_common.sh@857 -- # local i 00:06:14.095 23:05:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.095 23:05:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.095 23:05:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:14.095 23:05:19 -- common/autotest_common.sh@861 -- # break 00:06:14.095 23:05:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.095 23:05:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.095 23:05:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.095 1+0 records in 00:06:14.095 1+0 records out 00:06:14.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026693 s, 15.3 MB/s 00:06:14.095 23:05:19 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:14.095 23:05:19 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.095 23:05:19 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:14.095 23:05:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.095 23:05:19 -- common/autotest_common.sh@877 -- # return 0 00:06:14.095 23:05:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.095 23:05:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.095 23:05:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.095 23:05:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.095 23:05:19 
-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.354 23:05:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.354 { 00:06:14.354 "nbd_device": "/dev/nbd0", 00:06:14.354 "bdev_name": "Malloc0" 00:06:14.354 }, 00:06:14.354 { 00:06:14.354 "nbd_device": "/dev/nbd1", 00:06:14.354 "bdev_name": "Malloc1" 00:06:14.354 } 00:06:14.354 ]' 00:06:14.354 23:05:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.354 { 00:06:14.354 "nbd_device": "/dev/nbd0", 00:06:14.355 "bdev_name": "Malloc0" 00:06:14.355 }, 00:06:14.355 { 00:06:14.355 "nbd_device": "/dev/nbd1", 00:06:14.355 "bdev_name": "Malloc1" 00:06:14.355 } 00:06:14.355 ]' 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.355 /dev/nbd1' 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.355 /dev/nbd1' 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.355 256+0 records in 00:06:14.355 256+0 records out 00:06:14.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111943 s, 93.7 MB/s 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.355 256+0 records in 00:06:14.355 256+0 records out 00:06:14.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194456 s, 53.9 MB/s 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.355 256+0 records in 00:06:14.355 256+0 records out 00:06:14.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195823 s, 53.5 MB/s 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.355 23:05:19 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@51 -- # local i 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.355 23:05:19 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@41 -- # break 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.614 23:05:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@41 -- # break 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@65 -- # true 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.874 23:05:20 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.874 23:05:20 -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.874 23:05:20 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.133 23:05:20 -- event/event.sh@35 -- # sleep 3 00:06:15.392 [2024-11-02 23:05:21.003507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.392 [2024-11-02 23:05:21.064334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.392 [2024-11-02 23:05:21.064335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.392 [2024-11-02 23:05:21.105508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.392 [2024-11-02 23:05:21.105555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.682 23:05:23 -- event/event.sh@23 -- # for i in {0..2} 00:06:18.682 23:05:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:18.682 spdk_app_start Round 2 00:06:18.682 23:05:23 -- event/event.sh@25 -- # waitforlisten 451382 /var/tmp/spdk-nbd.sock 00:06:18.682 23:05:23 -- common/autotest_common.sh@819 -- # '[' -z 451382 ']' 00:06:18.682 23:05:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.682 23:05:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.682 23:05:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.682 23:05:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.682 23:05:23 -- common/autotest_common.sh@10 -- # set +x 00:06:18.682 23:05:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.682 23:05:23 -- common/autotest_common.sh@852 -- # return 0 00:06:18.682 23:05:23 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.682 Malloc0 00:06:18.682 23:05:24 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.682 Malloc1 00:06:18.682 23:05:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@12 -- # local i 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:06:18.682 23:05:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.682 23:05:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.941 /dev/nbd0 00:06:18.941 23:05:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.941 23:05:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.941 23:05:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:18.941 23:05:24 -- common/autotest_common.sh@857 -- # local i 00:06:18.941 23:05:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:18.941 23:05:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:18.941 23:05:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:18.941 23:05:24 -- common/autotest_common.sh@861 -- # break 00:06:18.941 23:05:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:18.941 23:05:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:18.941 23:05:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.941 1+0 records in 00:06:18.942 1+0 records out 00:06:18.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244311 s, 16.8 MB/s 00:06:18.942 23:05:24 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:18.942 23:05:24 -- common/autotest_common.sh@874 -- # size=4096 00:06:18.942 23:05:24 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:18.942 23:05:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:18.942 23:05:24 -- common/autotest_common.sh@877 -- # return 0 00:06:18.942 23:05:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.942 23:05:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.942 23:05:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.201 /dev/nbd1 00:06:19.201 23:05:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.201 23:05:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.201 23:05:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:19.201 23:05:24 -- common/autotest_common.sh@857 -- # local i 00:06:19.201 23:05:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.201 23:05:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.201 23:05:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:19.201 23:05:24 -- common/autotest_common.sh@861 -- # break 00:06:19.201 23:05:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.201 23:05:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.201 23:05:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.201 1+0 records in 00:06:19.201 1+0 records out 00:06:19.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277498 s, 14.8 MB/s 00:06:19.201 23:05:24 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.201 23:05:24 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.201 23:05:24 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.201 23:05:24 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:06:19.201 23:05:24 -- common/autotest_common.sh@877 -- # return 0 00:06:19.201 23:05:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.201 23:05:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.201 23:05:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.201 23:05:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.201 23:05:24 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.201 23:05:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.201 { 00:06:19.201 "nbd_device": "/dev/nbd0", 00:06:19.201 "bdev_name": "Malloc0" 00:06:19.201 }, 00:06:19.201 { 00:06:19.201 "nbd_device": "/dev/nbd1", 00:06:19.201 "bdev_name": "Malloc1" 00:06:19.201 } 00:06:19.201 ]' 00:06:19.460 23:05:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.460 23:05:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.460 { 00:06:19.460 "nbd_device": "/dev/nbd0", 00:06:19.460 "bdev_name": "Malloc0" 00:06:19.460 }, 00:06:19.460 { 00:06:19.460 "nbd_device": "/dev/nbd1", 00:06:19.460 "bdev_name": "Malloc1" 00:06:19.460 } 00:06:19.460 ]' 00:06:19.460 23:05:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.460 /dev/nbd1' 00:06:19.460 23:05:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.460 /dev/nbd1' 00:06:19.460 23:05:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.460 256+0 records in 00:06:19.460 256+0 records out 00:06:19.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106734 s, 98.2 MB/s 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.460 256+0 records in 00:06:19.460 256+0 records out 00:06:19.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192743 s, 54.4 MB/s 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.460 256+0 records in 00:06:19.460 256+0 records out 00:06:19.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203048 s, 51.6 MB/s 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.460 23:05:25 -- 
bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.460 23:05:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.461 23:05:25 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.461 23:05:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.461 23:05:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.461 23:05:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.461 23:05:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.461 23:05:25 -- bdev/nbd_common.sh@51 -- # local i 00:06:19.461 23:05:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.461 23:05:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@41 -- # break 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.720 23:05:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@41 -- # break 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.979 23:05:25 -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@65 -- # true 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.979 23:05:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.238 23:05:25 -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.238 23:05:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.238 23:05:25 -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.238 23:05:25 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.238 23:05:25 -- event/event.sh@35 -- # sleep 3 00:06:20.497 [2024-11-02 23:05:26.141850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.497 [2024-11-02 23:05:26.202808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.497 [2024-11-02 23:05:26.202810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.497 [2024-11-02 23:05:26.243977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.497 [2024-11-02 23:05:26.244021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.788 23:05:28 -- event/event.sh@38 -- # waitforlisten 451382 /var/tmp/spdk-nbd.sock 00:06:23.788 23:05:28 -- common/autotest_common.sh@819 -- # '[' -z 451382 ']' 00:06:23.788 23:05:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.788 23:05:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.788 23:05:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.788 23:05:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.788 23:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.788 23:05:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.788 23:05:29 -- common/autotest_common.sh@852 -- # return 0 00:06:23.788 23:05:29 -- event/event.sh@39 -- # killprocess 451382 00:06:23.788 23:05:29 -- common/autotest_common.sh@926 -- # '[' -z 451382 ']' 00:06:23.788 23:05:29 -- common/autotest_common.sh@930 -- # kill -0 451382 00:06:23.788 23:05:29 -- common/autotest_common.sh@931 -- # uname 00:06:23.788 23:05:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.788 23:05:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 451382 00:06:23.788 23:05:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.788 23:05:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.788 23:05:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 451382' 00:06:23.788 killing process with pid 451382 00:06:23.788 23:05:29 -- common/autotest_common.sh@945 -- # kill 451382 00:06:23.788 23:05:29 -- common/autotest_common.sh@950 -- # wait 451382 00:06:23.788 spdk_app_start is called in Round 0. 00:06:23.788 Shutdown signal received, stop current app iteration 00:06:23.788 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:23.788 spdk_app_start is called in Round 1. 
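The verify pass just completed follows the same pattern in every round: a 1 MiB random pattern is generated once, written through each NBD device with O_DIRECT, compared back with cmp, and then both devices are stopped so nbd_get_disks returns an empty list. A rough stand-alone equivalent of that loop (temporary file path simplified; the suite uses spdk/test/event/nbdrandtest):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  TMP=$(mktemp)
  dd if=/dev/urandom of=$TMP bs=4096 count=256            # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$TMP of=$nbd bs=4096 count=256 oflag=direct   # write the pattern through the device
      cmp -b -n 1M $TMP $nbd                              # read back and compare byte for byte
  done
  rm -f $TMP
  $RPC -s $SOCK nbd_stop_disk /dev/nbd0
  $RPC -s $SOCK nbd_stop_disk /dev/nbd1
  $RPC -s $SOCK nbd_get_disks                             # now prints [], so the count check passes

With Round 2 done, the app is told to shut down via spdk_kill_instance SIGTERM and restarts for Round 3, as the reactor messages above show, before the app_repeat helper finally kills pid 451382 and the per-round summary is printed.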
00:06:23.788 Shutdown signal received, stop current app iteration 00:06:23.788 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:23.788 spdk_app_start is called in Round 2. 00:06:23.788 Shutdown signal received, stop current app iteration 00:06:23.788 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:23.788 spdk_app_start is called in Round 3. 00:06:23.788 Shutdown signal received, stop current app iteration 00:06:23.788 23:05:29 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:23.788 23:05:29 -- event/event.sh@42 -- # return 0 00:06:23.788 00:06:23.788 real 0m16.513s 00:06:23.788 user 0m35.326s 00:06:23.788 sys 0m2.881s 00:06:23.788 23:05:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.788 23:05:29 -- common/autotest_common.sh@10 -- # set +x 00:06:23.788 ************************************ 00:06:23.788 END TEST app_repeat 00:06:23.788 ************************************ 00:06:23.788 23:05:29 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:23.788 23:05:29 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:23.788 23:05:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.788 23:05:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.788 23:05:29 -- common/autotest_common.sh@10 -- # set +x 00:06:23.788 ************************************ 00:06:23.788 START TEST cpu_locks 00:06:23.788 ************************************ 00:06:23.788 23:05:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:23.788 * Looking for test storage... 00:06:23.788 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:23.788 23:05:29 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:23.788 23:05:29 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:23.788 23:05:29 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:23.788 23:05:29 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:23.788 23:05:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.788 23:05:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.788 23:05:29 -- common/autotest_common.sh@10 -- # set +x 00:06:23.788 ************************************ 00:06:23.788 START TEST default_locks 00:06:23.788 ************************************ 00:06:23.788 23:05:29 -- common/autotest_common.sh@1104 -- # default_locks 00:06:23.788 23:05:29 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=454575 00:06:23.788 23:05:29 -- event/cpu_locks.sh@47 -- # waitforlisten 454575 00:06:23.788 23:05:29 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.788 23:05:29 -- common/autotest_common.sh@819 -- # '[' -z 454575 ']' 00:06:23.788 23:05:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.788 23:05:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.788 23:05:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.788 23:05:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.788 23:05:29 -- common/autotest_common.sh@10 -- # set +x 00:06:24.054 [2024-11-02 23:05:29.577713] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:24.054 [2024-11-02 23:05:29.577768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454575 ] 00:06:24.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.055 [2024-11-02 23:05:29.646214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.055 [2024-11-02 23:05:29.716825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.055 [2024-11-02 23:05:29.716953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.626 23:05:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.626 23:05:30 -- common/autotest_common.sh@852 -- # return 0 00:06:24.626 23:05:30 -- event/cpu_locks.sh@49 -- # locks_exist 454575 00:06:24.626 23:05:30 -- event/cpu_locks.sh@22 -- # lslocks -p 454575 00:06:24.626 23:05:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.194 lslocks: write error 00:06:25.194 23:05:30 -- event/cpu_locks.sh@50 -- # killprocess 454575 00:06:25.194 23:05:30 -- common/autotest_common.sh@926 -- # '[' -z 454575 ']' 00:06:25.194 23:05:30 -- common/autotest_common.sh@930 -- # kill -0 454575 00:06:25.194 23:05:30 -- common/autotest_common.sh@931 -- # uname 00:06:25.194 23:05:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.194 23:05:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 454575 00:06:25.194 23:05:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.194 23:05:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.194 23:05:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 454575' 00:06:25.194 killing process with pid 454575 00:06:25.194 23:05:30 -- common/autotest_common.sh@945 -- # kill 454575 00:06:25.194 23:05:30 -- common/autotest_common.sh@950 -- # wait 454575 00:06:25.763 23:05:31 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 454575 00:06:25.763 23:05:31 -- common/autotest_common.sh@640 -- # local es=0 00:06:25.763 23:05:31 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 454575 00:06:25.763 23:05:31 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:25.763 23:05:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.763 23:05:31 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:25.763 23:05:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.763 23:05:31 -- common/autotest_common.sh@643 -- # waitforlisten 454575 00:06:25.763 23:05:31 -- common/autotest_common.sh@819 -- # '[' -z 454575 ']' 00:06:25.763 23:05:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.763 23:05:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.763 23:05:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
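The locks_exist check above is the core assertion of default_locks: the freshly started target (pid 454575, core mask 0x1) must be holding a per-core lock file, which the helper verifies with lslocks. The stray 'lslocks: write error' is most likely just lslocks hitting a closed pipe once grep -q matches and exits, not a test failure. A stand-alone version of the same check:

  pid=454575                                  # spdk_tgt started above with -m 0x1
  lslocks -p "$pid" | grep -q spdk_cpu_lock \
      && echo "core lock held (/var/tmp/spdk_cpu_lock_000)" \
      || echo "no core lock held"

After killprocess, the NOT-wrapped waitforlisten that follows is expected to fail since the pid is gone; the 'No such process' and 'es=1' entries below confirm the negative case.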
00:06:25.763 23:05:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.763 23:05:31 -- common/autotest_common.sh@10 -- # set +x 00:06:25.763 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (454575) - No such process 00:06:25.763 ERROR: process (pid: 454575) is no longer running 00:06:25.763 23:05:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.763 23:05:31 -- common/autotest_common.sh@852 -- # return 1 00:06:25.763 23:05:31 -- common/autotest_common.sh@643 -- # es=1 00:06:25.763 23:05:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:25.763 23:05:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:25.763 23:05:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:25.763 23:05:31 -- event/cpu_locks.sh@54 -- # no_locks 00:06:25.763 23:05:31 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.763 23:05:31 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.763 23:05:31 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.763 00:06:25.763 real 0m1.738s 00:06:25.763 user 0m1.823s 00:06:25.763 sys 0m0.599s 00:06:25.763 23:05:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.763 23:05:31 -- common/autotest_common.sh@10 -- # set +x 00:06:25.763 ************************************ 00:06:25.763 END TEST default_locks 00:06:25.763 ************************************ 00:06:25.763 23:05:31 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:25.763 23:05:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:25.763 23:05:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.763 23:05:31 -- common/autotest_common.sh@10 -- # set +x 00:06:25.763 ************************************ 00:06:25.763 START TEST default_locks_via_rpc 00:06:25.763 ************************************ 00:06:25.763 23:05:31 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:25.763 23:05:31 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=454874 00:06:25.763 23:05:31 -- event/cpu_locks.sh@63 -- # waitforlisten 454874 00:06:25.763 23:05:31 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.763 23:05:31 -- common/autotest_common.sh@819 -- # '[' -z 454874 ']' 00:06:25.763 23:05:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.763 23:05:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.763 23:05:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.763 23:05:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.763 23:05:31 -- common/autotest_common.sh@10 -- # set +x 00:06:25.763 [2024-11-02 23:05:31.363509] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
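default_locks_via_rpc, which starts here, exercises the same lock through the JSON-RPC interface instead of process startup and teardown: the trace below shows framework_disable_cpumask_locks being called (after which the no_locks helper finds no spdk_cpu_lock files), then framework_enable_cpumask_locks, after which lslocks sees the lock again. A minimal sketch of those two calls against the default socket used by pid 454874:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC framework_disable_cpumask_locks     # target releases its /var/tmp/spdk_cpu_lock_* files
  lslocks -p 454874 | grep spdk_cpu_lock   # no match while locks are disabled
  $RPC framework_enable_cpumask_locks      # locks are claimed again
  lslocks -p 454874 | grep spdk_cpu_lock   # the lock file shows up once more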
00:06:25.763 [2024-11-02 23:05:31.363564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454874 ] 00:06:25.763 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.763 [2024-11-02 23:05:31.431353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.763 [2024-11-02 23:05:31.503930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.763 [2024-11-02 23:05:31.504053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.701 23:05:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.701 23:05:32 -- common/autotest_common.sh@852 -- # return 0 00:06:26.701 23:05:32 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:26.701 23:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:26.701 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.701 23:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:26.701 23:05:32 -- event/cpu_locks.sh@67 -- # no_locks 00:06:26.701 23:05:32 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.701 23:05:32 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.701 23:05:32 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.701 23:05:32 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.701 23:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:26.701 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.701 23:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:26.701 23:05:32 -- event/cpu_locks.sh@71 -- # locks_exist 454874 00:06:26.701 23:05:32 -- event/cpu_locks.sh@22 -- # lslocks -p 454874 00:06:26.701 23:05:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.960 23:05:32 -- event/cpu_locks.sh@73 -- # killprocess 454874 00:06:26.960 23:05:32 -- common/autotest_common.sh@926 -- # '[' -z 454874 ']' 00:06:26.960 23:05:32 -- common/autotest_common.sh@930 -- # kill -0 454874 00:06:26.960 23:05:32 -- common/autotest_common.sh@931 -- # uname 00:06:26.960 23:05:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.960 23:05:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 454874 00:06:26.960 23:05:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.960 23:05:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.960 23:05:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 454874' 00:06:26.960 killing process with pid 454874 00:06:26.960 23:05:32 -- common/autotest_common.sh@945 -- # kill 454874 00:06:26.960 23:05:32 -- common/autotest_common.sh@950 -- # wait 454874 00:06:27.529 00:06:27.529 real 0m1.707s 00:06:27.529 user 0m1.809s 00:06:27.529 sys 0m0.575s 00:06:27.529 23:05:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.529 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:06:27.529 ************************************ 00:06:27.529 END TEST default_locks_via_rpc 00:06:27.529 ************************************ 00:06:27.529 23:05:33 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:27.529 23:05:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.529 23:05:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.529 23:05:33 -- common/autotest_common.sh@10 
-- # set +x 00:06:27.529 ************************************ 00:06:27.529 START TEST non_locking_app_on_locked_coremask 00:06:27.529 ************************************ 00:06:27.529 23:05:33 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:27.529 23:05:33 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=455184 00:06:27.529 23:05:33 -- event/cpu_locks.sh@81 -- # waitforlisten 455184 /var/tmp/spdk.sock 00:06:27.529 23:05:33 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.529 23:05:33 -- common/autotest_common.sh@819 -- # '[' -z 455184 ']' 00:06:27.529 23:05:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.529 23:05:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.529 23:05:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.529 23:05:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.529 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:06:27.529 [2024-11-02 23:05:33.122501] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:27.529 [2024-11-02 23:05:33.122555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455184 ] 00:06:27.529 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.529 [2024-11-02 23:05:33.189933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.529 [2024-11-02 23:05:33.262613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.529 [2024-11-02 23:05:33.262728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.529 23:05:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.529 23:05:33 -- common/autotest_common.sh@852 -- # return 0 00:06:28.529 23:05:33 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=455407 00:06:28.529 23:05:33 -- event/cpu_locks.sh@85 -- # waitforlisten 455407 /var/tmp/spdk2.sock 00:06:28.529 23:05:33 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:28.529 23:05:33 -- common/autotest_common.sh@819 -- # '[' -z 455407 ']' 00:06:28.529 23:05:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.529 23:05:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.529 23:05:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.529 23:05:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.529 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:06:28.529 [2024-11-02 23:05:33.981539] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:28.529 [2024-11-02 23:05:33.981599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455407 ] 00:06:28.529 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.529 [2024-11-02 23:05:34.076959] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.529 [2024-11-02 23:05:34.076989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.529 [2024-11-02 23:05:34.218371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.529 [2024-11-02 23:05:34.218506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.100 23:05:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.100 23:05:34 -- common/autotest_common.sh@852 -- # return 0 00:06:29.100 23:05:34 -- event/cpu_locks.sh@87 -- # locks_exist 455184 00:06:29.100 23:05:34 -- event/cpu_locks.sh@22 -- # lslocks -p 455184 00:06:29.100 23:05:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.668 lslocks: write error 00:06:29.668 23:05:35 -- event/cpu_locks.sh@89 -- # killprocess 455184 00:06:29.668 23:05:35 -- common/autotest_common.sh@926 -- # '[' -z 455184 ']' 00:06:29.668 23:05:35 -- common/autotest_common.sh@930 -- # kill -0 455184 00:06:29.668 23:05:35 -- common/autotest_common.sh@931 -- # uname 00:06:29.668 23:05:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.668 23:05:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 455184 00:06:29.668 23:05:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.668 23:05:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.668 23:05:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 455184' 00:06:29.668 killing process with pid 455184 00:06:29.668 23:05:35 -- common/autotest_common.sh@945 -- # kill 455184 00:06:29.668 23:05:35 -- common/autotest_common.sh@950 -- # wait 455184 00:06:30.608 23:05:36 -- event/cpu_locks.sh@90 -- # killprocess 455407 00:06:30.608 23:05:36 -- common/autotest_common.sh@926 -- # '[' -z 455407 ']' 00:06:30.608 23:05:36 -- common/autotest_common.sh@930 -- # kill -0 455407 00:06:30.608 23:05:36 -- common/autotest_common.sh@931 -- # uname 00:06:30.608 23:05:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:30.608 23:05:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 455407 00:06:30.608 23:05:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:30.608 23:05:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:30.608 23:05:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 455407' 00:06:30.608 killing process with pid 455407 00:06:30.608 23:05:36 -- common/autotest_common.sh@945 -- # kill 455407 00:06:30.608 23:05:36 -- common/autotest_common.sh@950 -- # wait 455407 00:06:30.867 00:06:30.867 real 0m3.383s 00:06:30.867 user 0m3.634s 00:06:30.867 sys 0m1.069s 00:06:30.867 23:05:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.867 23:05:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.867 ************************************ 00:06:30.867 END TEST non_locking_app_on_locked_coremask 00:06:30.867 ************************************ 00:06:30.867 23:05:36 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
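non_locking_app_on_locked_coremask, which just finished above, is the coexistence case: a second spdk_tgt is started on the same core mask but with --disable-cpumask-locks and its own RPC socket, so it never tries to claim core 0 and both instances run side by side (note the 'CPU core locks deactivated.' notice for pid 455407 and the two 'Reactor started on core 0' lines). Reduced to the two launch commands used in the trace:

  BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  $BIN -m 0x1 &                                                  # primary: claims the core 0 lock file
  $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # secondary: same core, no lock claim

The next test, locking_app_on_unlocked_coremask, simply swaps the roles: the primary runs with --disable-cpumask-locks and the normally locked instance is started second.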
00:06:30.867 23:05:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.867 23:05:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.867 23:05:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.867 ************************************ 00:06:30.867 START TEST locking_app_on_unlocked_coremask 00:06:30.867 ************************************ 00:06:30.867 23:05:36 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:30.867 23:05:36 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=455765 00:06:30.867 23:05:36 -- event/cpu_locks.sh@99 -- # waitforlisten 455765 /var/tmp/spdk.sock 00:06:30.867 23:05:36 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.867 23:05:36 -- common/autotest_common.sh@819 -- # '[' -z 455765 ']' 00:06:30.867 23:05:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.867 23:05:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.867 23:05:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.867 23:05:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.867 23:05:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.867 [2024-11-02 23:05:36.555298] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:30.867 [2024-11-02 23:05:36.555351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455765 ] 00:06:30.867 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.127 [2024-11-02 23:05:36.625611] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:31.127 [2024-11-02 23:05:36.625642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.127 [2024-11-02 23:05:36.695725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.127 [2024-11-02 23:05:36.695869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.696 23:05:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.696 23:05:37 -- common/autotest_common.sh@852 -- # return 0 00:06:31.696 23:05:37 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.696 23:05:37 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=456033 00:06:31.696 23:05:37 -- event/cpu_locks.sh@103 -- # waitforlisten 456033 /var/tmp/spdk2.sock 00:06:31.696 23:05:37 -- common/autotest_common.sh@819 -- # '[' -z 456033 ']' 00:06:31.696 23:05:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.696 23:05:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.696 23:05:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:31.696 23:05:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.696 23:05:37 -- common/autotest_common.sh@10 -- # set +x 00:06:31.696 [2024-11-02 23:05:37.411844] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:31.696 [2024-11-02 23:05:37.411893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456033 ] 00:06:31.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.955 [2024-11-02 23:05:37.505327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.955 [2024-11-02 23:05:37.639825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.955 [2024-11-02 23:05:37.639980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.523 23:05:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.523 23:05:38 -- common/autotest_common.sh@852 -- # return 0 00:06:32.523 23:05:38 -- event/cpu_locks.sh@105 -- # locks_exist 456033 00:06:32.523 23:05:38 -- event/cpu_locks.sh@22 -- # lslocks -p 456033 00:06:32.523 23:05:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.902 lslocks: write error 00:06:33.902 23:05:39 -- event/cpu_locks.sh@107 -- # killprocess 455765 00:06:33.902 23:05:39 -- common/autotest_common.sh@926 -- # '[' -z 455765 ']' 00:06:33.902 23:05:39 -- common/autotest_common.sh@930 -- # kill -0 455765 00:06:33.902 23:05:39 -- common/autotest_common.sh@931 -- # uname 00:06:33.902 23:05:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:33.902 23:05:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 455765 00:06:33.902 23:05:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:33.902 23:05:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:33.902 23:05:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 455765' 00:06:33.902 killing process with pid 455765 00:06:33.902 23:05:39 -- common/autotest_common.sh@945 -- # kill 455765 00:06:33.902 23:05:39 -- common/autotest_common.sh@950 -- # wait 455765 00:06:34.470 23:05:40 -- event/cpu_locks.sh@108 -- # killprocess 456033 00:06:34.470 23:05:40 -- common/autotest_common.sh@926 -- # '[' -z 456033 ']' 00:06:34.470 23:05:40 -- common/autotest_common.sh@930 -- # kill -0 456033 00:06:34.470 23:05:40 -- common/autotest_common.sh@931 -- # uname 00:06:34.470 23:05:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.470 23:05:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 456033 00:06:34.470 23:05:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.470 23:05:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.471 23:05:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 456033' 00:06:34.471 killing process with pid 456033 00:06:34.471 23:05:40 -- common/autotest_common.sh@945 -- # kill 456033 00:06:34.471 23:05:40 -- common/autotest_common.sh@950 -- # wait 456033 00:06:34.730 00:06:34.730 real 0m3.931s 00:06:34.730 user 0m4.227s 00:06:34.730 sys 0m1.259s 00:06:34.730 23:05:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.730 23:05:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.730 ************************************ 00:06:34.730 END TEST locking_app_on_unlocked_coremask 00:06:34.730 
************************************ 00:06:34.730 23:05:40 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:34.730 23:05:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.730 23:05:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.730 23:05:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.730 ************************************ 00:06:34.730 START TEST locking_app_on_locked_coremask 00:06:34.730 ************************************ 00:06:34.730 23:05:40 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:34.730 23:05:40 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=456607 00:06:34.730 23:05:40 -- event/cpu_locks.sh@116 -- # waitforlisten 456607 /var/tmp/spdk.sock 00:06:34.730 23:05:40 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.989 23:05:40 -- common/autotest_common.sh@819 -- # '[' -z 456607 ']' 00:06:34.989 23:05:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.989 23:05:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.989 23:05:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.989 23:05:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.989 23:05:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.989 [2024-11-02 23:05:40.535741] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:34.989 [2024-11-02 23:05:40.535794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456607 ] 00:06:34.989 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.989 [2024-11-02 23:05:40.604959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.989 [2024-11-02 23:05:40.678491] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.989 [2024-11-02 23:05:40.678610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.927 23:05:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.927 23:05:41 -- common/autotest_common.sh@852 -- # return 0 00:06:35.927 23:05:41 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=456665 00:06:35.927 23:05:41 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 456665 /var/tmp/spdk2.sock 00:06:35.927 23:05:41 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.927 23:05:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.927 23:05:41 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 456665 /var/tmp/spdk2.sock 00:06:35.927 23:05:41 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:35.927 23:05:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.927 23:05:41 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:35.927 23:05:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.927 23:05:41 -- common/autotest_common.sh@643 -- # waitforlisten 456665 /var/tmp/spdk2.sock 00:06:35.928 23:05:41 -- common/autotest_common.sh@819 -- # '[' -z 456665 ']' 
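Here locking_app_on_locked_coremask sets up the conflict case: pid 456607 (started with -m 0x1) already holds the core 0 lock, and the second instance being launched at this point uses the same mask without --disable-cpumask-locks, so the NOT wrapper expects it to abort; the 'Cannot create lock on core 0, probably process 456607 has claimed it' and 'Unable to acquire lock on assigned core mask - exiting' entries that follow show exactly that. A rough reduction of the scenario (the real helper waits on the RPC socket rather than sleeping):

  BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  $BIN -m 0x1 &                        # first instance claims core 0
  sleep 1                              # crude stand-in for waitforlisten
  $BIN -m 0x1 -r /var/tmp/spdk2.sock   # second locked instance on the same core
  echo "exit status: $?"               # non-zero: startup aborts on the failed core claim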
00:06:35.928 23:05:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.928 23:05:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.928 23:05:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.928 23:05:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.928 23:05:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.928 [2024-11-02 23:05:41.390799] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:35.928 [2024-11-02 23:05:41.390852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456665 ] 00:06:35.928 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.928 [2024-11-02 23:05:41.483517] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 456607 has claimed it. 00:06:35.928 [2024-11-02 23:05:41.483555] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.497 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (456665) - No such process 00:06:36.497 ERROR: process (pid: 456665) is no longer running 00:06:36.497 23:05:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.497 23:05:42 -- common/autotest_common.sh@852 -- # return 1 00:06:36.497 23:05:42 -- common/autotest_common.sh@643 -- # es=1 00:06:36.497 23:05:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:36.497 23:05:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:36.497 23:05:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:36.497 23:05:42 -- event/cpu_locks.sh@122 -- # locks_exist 456607 00:06:36.497 23:05:42 -- event/cpu_locks.sh@22 -- # lslocks -p 456607 00:06:36.497 23:05:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.065 lslocks: write error 00:06:37.065 23:05:42 -- event/cpu_locks.sh@124 -- # killprocess 456607 00:06:37.065 23:05:42 -- common/autotest_common.sh@926 -- # '[' -z 456607 ']' 00:06:37.065 23:05:42 -- common/autotest_common.sh@930 -- # kill -0 456607 00:06:37.065 23:05:42 -- common/autotest_common.sh@931 -- # uname 00:06:37.065 23:05:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.065 23:05:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 456607 00:06:37.065 23:05:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.065 23:05:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.065 23:05:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 456607' 00:06:37.065 killing process with pid 456607 00:06:37.065 23:05:42 -- common/autotest_common.sh@945 -- # kill 456607 00:06:37.065 23:05:42 -- common/autotest_common.sh@950 -- # wait 456607 00:06:37.325 00:06:37.325 real 0m2.444s 00:06:37.325 user 0m2.712s 00:06:37.325 sys 0m0.709s 00:06:37.325 23:05:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.325 23:05:42 -- common/autotest_common.sh@10 -- # set +x 00:06:37.325 ************************************ 00:06:37.325 END TEST locking_app_on_locked_coremask 00:06:37.325 ************************************ 00:06:37.325 23:05:42 -- event/cpu_locks.sh@171 -- # 
run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.325 23:05:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.325 23:05:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.325 23:05:42 -- common/autotest_common.sh@10 -- # set +x 00:06:37.325 ************************************ 00:06:37.325 START TEST locking_overlapped_coremask 00:06:37.325 ************************************ 00:06:37.325 23:05:42 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:37.326 23:05:42 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=457016 00:06:37.326 23:05:42 -- event/cpu_locks.sh@133 -- # waitforlisten 457016 /var/tmp/spdk.sock 00:06:37.326 23:05:42 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.326 23:05:42 -- common/autotest_common.sh@819 -- # '[' -z 457016 ']' 00:06:37.326 23:05:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.326 23:05:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.326 23:05:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.326 23:05:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.326 23:05:42 -- common/autotest_common.sh@10 -- # set +x 00:06:37.326 [2024-11-02 23:05:43.027427] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:37.326 [2024-11-02 23:05:43.027480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457016 ] 00:06:37.326 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.586 [2024-11-02 23:05:43.096162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.586 [2024-11-02 23:05:43.170305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.586 [2024-11-02 23:05:43.170443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.586 [2024-11-02 23:05:43.170535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.586 [2024-11-02 23:05:43.170538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.155 23:05:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.155 23:05:43 -- common/autotest_common.sh@852 -- # return 0 00:06:38.155 23:05:43 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=457188 00:06:38.155 23:05:43 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 457188 /var/tmp/spdk2.sock 00:06:38.155 23:05:43 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.155 23:05:43 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.155 23:05:43 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 457188 /var/tmp/spdk2.sock 00:06:38.155 23:05:43 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:38.155 23:05:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.155 23:05:43 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:38.155 23:05:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.155 23:05:43 -- common/autotest_common.sh@643 -- # waitforlisten 
457188 /var/tmp/spdk2.sock 00:06:38.155 23:05:43 -- common/autotest_common.sh@819 -- # '[' -z 457188 ']' 00:06:38.155 23:05:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.155 23:05:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.155 23:05:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.155 23:05:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.155 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:06:38.155 [2024-11-02 23:05:43.891521] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:38.155 [2024-11-02 23:05:43.891569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457188 ] 00:06:38.415 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.415 [2024-11-02 23:05:43.989739] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 457016 has claimed it. 00:06:38.415 [2024-11-02 23:05:43.989778] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.984 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (457188) - No such process 00:06:38.984 ERROR: process (pid: 457188) is no longer running 00:06:38.984 23:05:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.984 23:05:44 -- common/autotest_common.sh@852 -- # return 1 00:06:38.984 23:05:44 -- common/autotest_common.sh@643 -- # es=1 00:06:38.984 23:05:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.984 23:05:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.984 23:05:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.984 23:05:44 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.984 23:05:44 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.984 23:05:44 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.984 23:05:44 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.984 23:05:44 -- event/cpu_locks.sh@141 -- # killprocess 457016 00:06:38.984 23:05:44 -- common/autotest_common.sh@926 -- # '[' -z 457016 ']' 00:06:38.984 23:05:44 -- common/autotest_common.sh@930 -- # kill -0 457016 00:06:38.984 23:05:44 -- common/autotest_common.sh@931 -- # uname 00:06:38.984 23:05:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.984 23:05:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 457016 00:06:38.984 23:05:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.984 23:05:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.984 23:05:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 457016' 00:06:38.984 killing process with pid 457016 00:06:38.984 23:05:44 -- common/autotest_common.sh@945 -- # kill 457016 00:06:38.984 23:05:44 -- common/autotest_common.sh@950 -- # wait 457016 00:06:39.243 
00:06:39.243 real 0m1.944s 00:06:39.243 user 0m5.442s 00:06:39.243 sys 0m0.458s 00:06:39.243 23:05:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.243 23:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.243 ************************************ 00:06:39.243 END TEST locking_overlapped_coremask 00:06:39.243 ************************************ 00:06:39.243 23:05:44 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.243 23:05:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.243 23:05:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.243 23:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.243 ************************************ 00:06:39.244 START TEST locking_overlapped_coremask_via_rpc 00:06:39.244 ************************************ 00:06:39.244 23:05:44 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:39.244 23:05:44 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=457480 00:06:39.244 23:05:44 -- event/cpu_locks.sh@149 -- # waitforlisten 457480 /var/tmp/spdk.sock 00:06:39.244 23:05:44 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.244 23:05:44 -- common/autotest_common.sh@819 -- # '[' -z 457480 ']' 00:06:39.244 23:05:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.244 23:05:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.244 23:05:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.244 23:05:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.244 23:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.503 [2024-11-02 23:05:45.024450] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:39.503 [2024-11-02 23:05:45.024504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457480 ] 00:06:39.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.503 [2024-11-02 23:05:45.092722] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
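The locking_overlapped_coremask run that ended just above used partially overlapping masks: the primary (457016) ran with -m 0x7 and held /var/tmp/spdk_cpu_lock_000 through _002, while the second instance asked for -m 0x1c, so the claim failed on the shared core 2 and check_remaining_locks then confirmed only the three expected lock files were left. The overlap is easy to verify by hand:

  # core masks used in that run
  # 0x7  = 0b00111 -> cores 0,1,2   (primary, holds spdk_cpu_lock_000..002)
  # 0x1c = 0b11100 -> cores 2,3,4   (second instance)
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2

The test starting here, locking_overlapped_coremask_via_rpc, reuses the same two masks but launches both targets with --disable-cpumask-locks and moves the claim to the RPC layer.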
00:06:39.503 [2024-11-02 23:05:45.092753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.503 [2024-11-02 23:05:45.155786] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.503 [2024-11-02 23:05:45.155943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.503 [2024-11-02 23:05:45.156066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.503 [2024-11-02 23:05:45.156068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.071 23:05:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.071 23:05:45 -- common/autotest_common.sh@852 -- # return 0 00:06:40.071 23:05:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=457503 00:06:40.071 23:05:45 -- event/cpu_locks.sh@153 -- # waitforlisten 457503 /var/tmp/spdk2.sock 00:06:40.071 23:05:45 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.071 23:05:45 -- common/autotest_common.sh@819 -- # '[' -z 457503 ']' 00:06:40.071 23:05:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.071 23:05:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.071 23:05:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.071 23:05:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.071 23:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:40.331 [2024-11-02 23:05:45.873953] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:40.331 [2024-11-02 23:05:45.874011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457503 ] 00:06:40.331 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.331 [2024-11-02 23:05:45.975073] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.331 [2024-11-02 23:05:45.975105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.590 [2024-11-02 23:05:46.111567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.590 [2024-11-02 23:05:46.111747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.590 [2024-11-02 23:05:46.111880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.590 [2024-11-02 23:05:46.111882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.159 23:05:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.159 23:05:46 -- common/autotest_common.sh@852 -- # return 0 00:06:41.159 23:05:46 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.159 23:05:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.159 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:41.159 23:05:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.159 23:05:46 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.159 23:05:46 -- common/autotest_common.sh@640 -- # local es=0 00:06:41.159 23:05:46 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.159 23:05:46 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:41.159 23:05:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.159 23:05:46 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:41.159 23:05:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.159 23:05:46 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.159 23:05:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.159 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:41.159 [2024-11-02 23:05:46.716037] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 457480 has claimed it. 00:06:41.159 request: 00:06:41.159 { 00:06:41.159 "method": "framework_enable_cpumask_locks", 00:06:41.159 "req_id": 1 00:06:41.159 } 00:06:41.159 Got JSON-RPC error response 00:06:41.159 response: 00:06:41.159 { 00:06:41.159 "code": -32603, 00:06:41.159 "message": "Failed to claim CPU core: 2" 00:06:41.159 } 00:06:41.159 23:05:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:41.159 23:05:46 -- common/autotest_common.sh@643 -- # es=1 00:06:41.159 23:05:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.159 23:05:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.159 23:05:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.159 23:05:46 -- event/cpu_locks.sh@158 -- # waitforlisten 457480 /var/tmp/spdk.sock 00:06:41.159 23:05:46 -- common/autotest_common.sh@819 -- # '[' -z 457480 ']' 00:06:41.159 23:05:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.159 23:05:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.159 23:05:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
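A minimal shell sketch of the scenario the trace above exercises: two targets are started with core-mask locking deferred, then each is asked to claim its cores over its own RPC socket. The spdk_tgt flags, the second socket path and the framework_enable_cpumask_locks method are taken from the trace itself; the relative paths to spdk_tgt and rpc.py are assumptions about a typical SPDK checkout.

# Masks 0x7 (cores 0-2) and 0x1c (cores 2-4) overlap on core 2.
./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
# Wait for both RPC sockets to appear (the test framework uses waitforlisten),
# then let the first target claim its cores; this creates
# /var/tmp/spdk_cpu_lock_000 through _002.
./scripts/rpc.py framework_enable_cpumask_locks
# The second claim overlaps on core 2 and is expected to fail with the
# JSON-RPC error shown above: code -32603, "Failed to claim CPU core: 2".
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks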
00:06:41.159 23:05:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.159 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:41.419 23:05:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.419 23:05:46 -- common/autotest_common.sh@852 -- # return 0 00:06:41.419 23:05:46 -- event/cpu_locks.sh@159 -- # waitforlisten 457503 /var/tmp/spdk2.sock 00:06:41.419 23:05:46 -- common/autotest_common.sh@819 -- # '[' -z 457503 ']' 00:06:41.419 23:05:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.419 23:05:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.419 23:05:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.419 23:05:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.419 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:41.419 23:05:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.419 23:05:47 -- common/autotest_common.sh@852 -- # return 0 00:06:41.419 23:05:47 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.419 23:05:47 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.419 23:05:47 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.419 23:05:47 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.419 00:06:41.419 real 0m2.145s 00:06:41.419 user 0m0.861s 00:06:41.419 sys 0m0.205s 00:06:41.419 23:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.419 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:41.419 ************************************ 00:06:41.419 END TEST locking_overlapped_coremask_via_rpc 00:06:41.419 ************************************ 00:06:41.419 23:05:47 -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.419 23:05:47 -- event/cpu_locks.sh@15 -- # [[ -z 457480 ]] 00:06:41.419 23:05:47 -- event/cpu_locks.sh@15 -- # killprocess 457480 00:06:41.419 23:05:47 -- common/autotest_common.sh@926 -- # '[' -z 457480 ']' 00:06:41.419 23:05:47 -- common/autotest_common.sh@930 -- # kill -0 457480 00:06:41.419 23:05:47 -- common/autotest_common.sh@931 -- # uname 00:06:41.419 23:05:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.419 23:05:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 457480 00:06:41.678 23:05:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.678 23:05:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.678 23:05:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 457480' 00:06:41.678 killing process with pid 457480 00:06:41.678 23:05:47 -- common/autotest_common.sh@945 -- # kill 457480 00:06:41.678 23:05:47 -- common/autotest_common.sh@950 -- # wait 457480 00:06:41.938 23:05:47 -- event/cpu_locks.sh@16 -- # [[ -z 457503 ]] 00:06:41.938 23:05:47 -- event/cpu_locks.sh@16 -- # killprocess 457503 00:06:41.938 23:05:47 -- common/autotest_common.sh@926 -- # '[' -z 457503 ']' 00:06:41.938 23:05:47 -- common/autotest_common.sh@930 -- # kill -0 457503 00:06:41.938 23:05:47 -- common/autotest_common.sh@931 -- # uname 00:06:41.938 
23:05:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.938 23:05:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 457503 00:06:41.938 23:05:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:41.938 23:05:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:41.938 23:05:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 457503' 00:06:41.938 killing process with pid 457503 00:06:41.938 23:05:47 -- common/autotest_common.sh@945 -- # kill 457503 00:06:41.938 23:05:47 -- common/autotest_common.sh@950 -- # wait 457503 00:06:42.506 23:05:47 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.506 23:05:47 -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.506 23:05:47 -- event/cpu_locks.sh@15 -- # [[ -z 457480 ]] 00:06:42.506 23:05:47 -- event/cpu_locks.sh@15 -- # killprocess 457480 00:06:42.506 23:05:47 -- common/autotest_common.sh@926 -- # '[' -z 457480 ']' 00:06:42.506 23:05:47 -- common/autotest_common.sh@930 -- # kill -0 457480 00:06:42.506 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (457480) - No such process 00:06:42.506 23:05:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 457480 is not found' 00:06:42.506 Process with pid 457480 is not found 00:06:42.506 23:05:47 -- event/cpu_locks.sh@16 -- # [[ -z 457503 ]] 00:06:42.506 23:05:47 -- event/cpu_locks.sh@16 -- # killprocess 457503 00:06:42.506 23:05:47 -- common/autotest_common.sh@926 -- # '[' -z 457503 ']' 00:06:42.506 23:05:47 -- common/autotest_common.sh@930 -- # kill -0 457503 00:06:42.506 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (457503) - No such process 00:06:42.506 23:05:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 457503 is not found' 00:06:42.506 Process with pid 457503 is not found 00:06:42.506 23:05:47 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.506 00:06:42.506 real 0m18.563s 00:06:42.506 user 0m31.488s 00:06:42.506 sys 0m5.825s 00:06:42.506 23:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.506 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:42.506 ************************************ 00:06:42.506 END TEST cpu_locks 00:06:42.506 ************************************ 00:06:42.506 00:06:42.506 real 0m44.314s 00:06:42.506 user 1m23.884s 00:06:42.506 sys 0m9.694s 00:06:42.506 23:05:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.506 23:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:42.506 ************************************ 00:06:42.506 END TEST event 00:06:42.506 ************************************ 00:06:42.506 23:05:48 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:42.506 23:05:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.506 23:05:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.506 23:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:42.506 ************************************ 00:06:42.506 START TEST thread 00:06:42.506 ************************************ 00:06:42.506 23:05:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:42.506 * Looking for test storage... 
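The check_remaining_locks helper used by the cpu_locks tests above reduces to comparing a glob of /var/tmp/spdk_cpu_lock_* against the set expected for the target's core mask. A quick manual equivalent, assuming a target started with -m 0x7 is still running:

# Cores 0-2 claimed, so exactly these three lock files should exist:
ls /var/tmp/spdk_cpu_lock_*
# expected: /var/tmp/spdk_cpu_lock_000, _001 and _002; anything else indicates
# a stale or unexpected claim (the suite clears them with 'rm -f' on cleanup).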
00:06:42.506 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:42.506 23:05:48 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.506 23:05:48 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:42.506 23:05:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.506 23:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:42.506 ************************************ 00:06:42.506 START TEST thread_poller_perf 00:06:42.506 ************************************ 00:06:42.506 23:05:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.506 [2024-11-02 23:05:48.193340] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:42.506 [2024-11-02 23:05:48.193426] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458124 ] 00:06:42.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.765 [2024-11-02 23:05:48.266549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.765 [2024-11-02 23:05:48.334428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.765 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.703 [2024-11-02T22:05:49.460Z] ====================================== 00:06:43.703 [2024-11-02T22:05:49.460Z] busy:2510852498 (cyc) 00:06:43.703 [2024-11-02T22:05:49.460Z] total_run_count: 412000 00:06:43.703 [2024-11-02T22:05:49.460Z] tsc_hz: 2500000000 (cyc) 00:06:43.703 [2024-11-02T22:05:49.460Z] ====================================== 00:06:43.703 [2024-11-02T22:05:49.460Z] poller_cost: 6094 (cyc), 2437 (nsec) 00:06:43.703 00:06:43.703 real 0m1.251s 00:06:43.703 user 0m1.161s 00:06:43.703 sys 0m0.086s 00:06:43.703 23:05:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.703 23:05:49 -- common/autotest_common.sh@10 -- # set +x 00:06:43.703 ************************************ 00:06:43.703 END TEST thread_poller_perf 00:06:43.703 ************************************ 00:06:43.962 23:05:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.962 23:05:49 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:43.962 23:05:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.962 23:05:49 -- common/autotest_common.sh@10 -- # set +x 00:06:43.962 ************************************ 00:06:43.962 START TEST thread_poller_perf 00:06:43.962 ************************************ 00:06:43.962 23:05:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.962 [2024-11-02 23:05:49.490983] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:43.962 [2024-11-02 23:05:49.491073] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458399 ] 00:06:43.962 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.962 [2024-11-02 23:05:49.562555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.962 [2024-11-02 23:05:49.627019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.962 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:45.341 [2024-11-02T22:05:51.098Z] ====================================== 00:06:45.341 [2024-11-02T22:05:51.098Z] busy:2502379200 (cyc) 00:06:45.341 [2024-11-02T22:05:51.098Z] total_run_count: 5470000 00:06:45.341 [2024-11-02T22:05:51.098Z] tsc_hz: 2500000000 (cyc) 00:06:45.341 [2024-11-02T22:05:51.098Z] ====================================== 00:06:45.342 [2024-11-02T22:05:51.099Z] poller_cost: 457 (cyc), 182 (nsec) 00:06:45.342 00:06:45.342 real 0m1.244s 00:06:45.342 user 0m1.157s 00:06:45.342 sys 0m0.083s 00:06:45.342 23:05:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.342 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.342 ************************************ 00:06:45.342 END TEST thread_poller_perf 00:06:45.342 ************************************ 00:06:45.342 23:05:50 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:45.342 00:06:45.342 real 0m2.686s 00:06:45.342 user 0m2.380s 00:06:45.342 sys 0m0.324s 00:06:45.342 23:05:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.342 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.342 ************************************ 00:06:45.342 END TEST thread 00:06:45.342 ************************************ 00:06:45.342 23:05:50 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:45.342 23:05:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.342 23:05:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.342 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.342 ************************************ 00:06:45.342 START TEST accel 00:06:45.342 ************************************ 00:06:45.342 23:05:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:45.342 * Looking for test storage... 
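The poller_cost figures in the two poller_perf summaries above follow directly from the printed counters: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure is that result divided by (tsc_hz / 1e9). For the 1-microsecond-period run, 2510852498 / 412000 is about 6094 cycles, and 6094 / 2.5 is about 2437 ns at the reported 2.5 GHz TSC; for the 0-microsecond-period run, 2502379200 / 5470000 is about 457 cycles, or roughly 182 ns.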
00:06:45.342 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:45.342 23:05:50 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:45.342 23:05:50 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:45.342 23:05:50 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.342 23:05:50 -- accel/accel.sh@59 -- # spdk_tgt_pid=458666 00:06:45.342 23:05:50 -- accel/accel.sh@60 -- # waitforlisten 458666 00:06:45.342 23:05:50 -- common/autotest_common.sh@819 -- # '[' -z 458666 ']' 00:06:45.342 23:05:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.342 23:05:50 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:45.342 23:05:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.342 23:05:50 -- accel/accel.sh@58 -- # build_accel_config 00:06:45.342 23:05:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.342 23:05:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.342 23:05:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.342 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.342 23:05:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.342 23:05:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.342 23:05:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.342 23:05:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.342 23:05:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.342 23:05:50 -- accel/accel.sh@42 -- # jq -r . 00:06:45.342 [2024-11-02 23:05:50.947668] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:45.342 [2024-11-02 23:05:50.947725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458666 ] 00:06:45.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.342 [2024-11-02 23:05:51.016841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.342 [2024-11-02 23:05:51.090284] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.342 [2024-11-02 23:05:51.090415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.280 23:05:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.280 23:05:51 -- common/autotest_common.sh@852 -- # return 0 00:06:46.280 23:05:51 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:46.280 23:05:51 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:46.280 23:05:51 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:46.280 23:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.280 23:05:51 -- common/autotest_common.sh@10 -- # set +x 00:06:46.280 23:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 
23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # IFS== 00:06:46.280 23:05:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.280 23:05:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.280 23:05:51 -- accel/accel.sh@67 -- # killprocess 458666 00:06:46.280 23:05:51 -- common/autotest_common.sh@926 -- # '[' -z 458666 ']' 00:06:46.280 23:05:51 -- common/autotest_common.sh@930 -- # kill -0 458666 00:06:46.280 23:05:51 -- common/autotest_common.sh@931 -- # uname 00:06:46.280 23:05:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:46.280 23:05:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 458666 00:06:46.280 23:05:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:46.280 23:05:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:46.280 23:05:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 458666' 00:06:46.280 killing process with pid 458666 00:06:46.280 23:05:51 -- common/autotest_common.sh@945 -- # kill 458666 00:06:46.280 23:05:51 -- common/autotest_common.sh@950 -- # wait 458666 00:06:46.540 23:05:52 -- accel/accel.sh@68 -- # trap - ERR 00:06:46.540 23:05:52 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:46.540 23:05:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:46.540 23:05:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.540 23:05:52 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 23:05:52 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:46.540 23:05:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:46.540 23:05:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.540 23:05:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.540 23:05:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.540 23:05:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.540 23:05:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.540 23:05:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.540 23:05:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.540 23:05:52 -- accel/accel.sh@42 -- # jq -r . 
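The opcode table built a few lines up comes from a single RPC plus a jq reshape; the method name and the filter are copied from the trace, while the rpc.py path is an assumption about the checkout layout. A standalone sketch:

./scripts/rpc.py accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# With no hardware accel modules loaded, every opcode is reported as
# "<opcode>=software", which is exactly what expected_opcs records above.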
00:06:46.540 23:05:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.540 23:05:52 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 23:05:52 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:46.540 23:05:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.540 23:05:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.540 23:05:52 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 ************************************ 00:06:46.540 START TEST accel_missing_filename 00:06:46.540 ************************************ 00:06:46.540 23:05:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:46.540 23:05:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.540 23:05:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:46.540 23:05:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.540 23:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.540 23:05:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.540 23:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.540 23:05:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:46.540 23:05:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:46.540 23:05:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.540 23:05:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.540 23:05:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.540 23:05:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.540 23:05:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.540 23:05:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.540 23:05:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.540 23:05:52 -- accel/accel.sh@42 -- # jq -r . 00:06:46.799 [2024-11-02 23:05:52.310077] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:46.799 [2024-11-02 23:05:52.310142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458904 ] 00:06:46.799 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.799 [2024-11-02 23:05:52.380970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.799 [2024-11-02 23:05:52.448655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.799 [2024-11-02 23:05:52.489765] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.799 [2024-11-02 23:05:52.550088] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:47.059 A filename is required. 
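The "A filename is required." failure above is the negative case this test wants: compress/decompress workloads need an uncompressed input file passed with -l. A passing form of the same command, mirroring the compress run that follows in this log (the binary and data-file paths belong to this workspace and are shortened here to an SPDK checkout root), would be:

# -l names the uncompressed input file to feed the compress workload
./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib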
00:06:47.059 23:05:52 -- common/autotest_common.sh@643 -- # es=234 00:06:47.059 23:05:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.059 23:05:52 -- common/autotest_common.sh@652 -- # es=106 00:06:47.059 23:05:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:47.059 23:05:52 -- common/autotest_common.sh@660 -- # es=1 00:06:47.059 23:05:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.059 00:06:47.059 real 0m0.359s 00:06:47.059 user 0m0.266s 00:06:47.059 sys 0m0.129s 00:06:47.059 23:05:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.059 23:05:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.059 ************************************ 00:06:47.059 END TEST accel_missing_filename 00:06:47.059 ************************************ 00:06:47.059 23:05:52 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:47.059 23:05:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:47.059 23:05:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.059 23:05:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.059 ************************************ 00:06:47.059 START TEST accel_compress_verify 00:06:47.059 ************************************ 00:06:47.059 23:05:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:47.059 23:05:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.059 23:05:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:47.059 23:05:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:47.059 23:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.059 23:05:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:47.059 23:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.059 23:05:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:47.059 23:05:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:47.059 23:05:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.059 23:05:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.059 23:05:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.059 23:05:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.059 23:05:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.059 23:05:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.059 23:05:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.059 23:05:52 -- accel/accel.sh@42 -- # jq -r . 00:06:47.059 [2024-11-02 23:05:52.717503] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:47.059 [2024-11-02 23:05:52.717575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459065 ] 00:06:47.059 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.059 [2024-11-02 23:05:52.789828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.318 [2024-11-02 23:05:52.854002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.318 [2024-11-02 23:05:52.894626] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.318 [2024-11-02 23:05:52.954480] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:47.318 00:06:47.318 Compression does not support the verify option, aborting. 00:06:47.318 23:05:53 -- common/autotest_common.sh@643 -- # es=161 00:06:47.318 23:05:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.319 23:05:53 -- common/autotest_common.sh@652 -- # es=33 00:06:47.319 23:05:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:47.319 23:05:53 -- common/autotest_common.sh@660 -- # es=1 00:06:47.319 23:05:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.319 00:06:47.319 real 0m0.356s 00:06:47.319 user 0m0.261s 00:06:47.319 sys 0m0.132s 00:06:47.319 23:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.319 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.319 ************************************ 00:06:47.319 END TEST accel_compress_verify 00:06:47.319 ************************************ 00:06:47.578 23:05:53 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:47.578 23:05:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:47.578 23:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.578 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.578 ************************************ 00:06:47.578 START TEST accel_wrong_workload 00:06:47.578 ************************************ 00:06:47.578 23:05:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:47.578 23:05:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.578 23:05:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:47.578 23:05:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:47.578 23:05:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.578 23:05:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:47.578 23:05:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.578 23:05:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:47.578 23:05:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:47.578 23:05:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.578 23:05:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.578 23:05:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.578 23:05:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.578 23:05:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.578 23:05:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.578 23:05:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.578 23:05:53 -- accel/accel.sh@42 -- # jq -r . 
00:06:47.578 Unsupported workload type: foobar 00:06:47.578 [2024-11-02 23:05:53.118412] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:47.579 accel_perf options: 00:06:47.579 [-h help message] 00:06:47.579 [-q queue depth per core] 00:06:47.579 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.579 [-T number of threads per core 00:06:47.579 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.579 [-t time in seconds] 00:06:47.579 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.579 [ dif_verify, , dif_generate, dif_generate_copy 00:06:47.579 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.579 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.579 [-S for crc32c workload, use this seed value (default 0) 00:06:47.579 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.579 [-f for fill workload, use this BYTE value (default 255) 00:06:47.579 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.579 [-y verify result if this switch is on] 00:06:47.579 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.579 Can be used to spread operations across a wider range of memory. 00:06:47.579 23:05:53 -- common/autotest_common.sh@643 -- # es=1 00:06:47.579 23:05:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.579 23:05:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:47.579 23:05:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.579 00:06:47.579 real 0m0.037s 00:06:47.579 user 0m0.021s 00:06:47.579 sys 0m0.016s 00:06:47.579 23:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.579 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.579 ************************************ 00:06:47.579 END TEST accel_wrong_workload 00:06:47.579 ************************************ 00:06:47.579 Error: writing output failed: Broken pipe 00:06:47.579 23:05:53 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:47.579 23:05:53 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:47.579 23:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.579 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.579 ************************************ 00:06:47.579 START TEST accel_negative_buffers 00:06:47.579 ************************************ 00:06:47.579 23:05:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:47.579 23:05:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.579 23:05:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:47.579 23:05:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:47.579 23:05:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.579 23:05:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:47.579 23:05:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.579 23:05:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:47.579 23:05:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:06:47.579 23:05:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.579 23:05:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.579 23:05:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.579 23:05:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.579 23:05:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.579 23:05:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.579 23:05:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.579 23:05:53 -- accel/accel.sh@42 -- # jq -r . 00:06:47.579 -x option must be non-negative. 00:06:47.579 [2024-11-02 23:05:53.198999] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:47.579 accel_perf options: 00:06:47.579 [-h help message] 00:06:47.579 [-q queue depth per core] 00:06:47.579 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.579 [-T number of threads per core 00:06:47.579 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.579 [-t time in seconds] 00:06:47.579 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.579 [ dif_verify, , dif_generate, dif_generate_copy 00:06:47.579 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.579 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.579 [-S for crc32c workload, use this seed value (default 0) 00:06:47.579 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.579 [-f for fill workload, use this BYTE value (default 255) 00:06:47.579 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.579 [-y verify result if this switch is on] 00:06:47.579 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.579 Can be used to spread operations across a wider range of memory. 
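For contrast with the two rejected invocations above (-w foobar, then -x -1), the options in the listing compose into valid runs; the flags below are taken from that listing and from the crc32c test that follows, while the relative accel_perf path is an assumption about the checkout layout.

# CRC-32C for 1 second, seed 32, verifying results (as the next test does):
./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# xor requires at least two source buffers, so -x must be 2 or more:
./build/examples/accel_perf -t 1 -w xor -y -x 2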
00:06:47.579 23:05:53 -- common/autotest_common.sh@643 -- # es=1 00:06:47.579 23:05:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.579 23:05:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:47.579 23:05:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.579 00:06:47.579 real 0m0.036s 00:06:47.579 user 0m0.022s 00:06:47.579 sys 0m0.014s 00:06:47.579 23:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.579 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.579 ************************************ 00:06:47.579 END TEST accel_negative_buffers 00:06:47.579 ************************************ 00:06:47.579 Error: writing output failed: Broken pipe 00:06:47.579 23:05:53 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:47.579 23:05:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:47.579 23:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.579 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.579 ************************************ 00:06:47.579 START TEST accel_crc32c 00:06:47.579 ************************************ 00:06:47.579 23:05:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:47.579 23:05:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.579 23:05:53 -- accel/accel.sh@17 -- # local accel_module 00:06:47.579 23:05:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.579 23:05:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:47.579 23:05:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.579 23:05:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.579 23:05:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.579 23:05:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.579 23:05:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.579 23:05:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.579 23:05:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.579 23:05:53 -- accel/accel.sh@42 -- # jq -r . 00:06:47.579 [2024-11-02 23:05:53.281221] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:47.579 [2024-11-02 23:05:53.281279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459126 ] 00:06:47.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.839 [2024-11-02 23:05:53.350872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.839 [2024-11-02 23:05:53.421874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.216 23:05:54 -- accel/accel.sh@18 -- # out=' 00:06:49.216 SPDK Configuration: 00:06:49.216 Core mask: 0x1 00:06:49.216 00:06:49.216 Accel Perf Configuration: 00:06:49.216 Workload Type: crc32c 00:06:49.216 CRC-32C seed: 32 00:06:49.216 Transfer size: 4096 bytes 00:06:49.216 Vector count 1 00:06:49.216 Module: software 00:06:49.216 Queue depth: 32 00:06:49.216 Allocate depth: 32 00:06:49.216 # threads/core: 1 00:06:49.216 Run time: 1 seconds 00:06:49.216 Verify: Yes 00:06:49.216 00:06:49.216 Running for 1 seconds... 
00:06:49.216 00:06:49.216 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.216 ------------------------------------------------------------------------------------ 00:06:49.216 0,0 596384/s 2329 MiB/s 0 0 00:06:49.216 ==================================================================================== 00:06:49.216 Total 596384/s 2329 MiB/s 0 0' 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:49.216 23:05:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.216 23:05:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.216 23:05:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.216 23:05:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.216 23:05:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.216 23:05:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.216 23:05:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.216 23:05:54 -- accel/accel.sh@42 -- # jq -r . 00:06:49.216 [2024-11-02 23:05:54.626884] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:49.216 [2024-11-02 23:05:54.626937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459399 ] 00:06:49.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.216 [2024-11-02 23:05:54.694158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.216 [2024-11-02 23:05:54.757822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val= 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val= 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val=0x1 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val= 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val= 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val=crc32c 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val=32 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 
-- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val= 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val=software 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val=32 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val=32 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val=1 00:06:49.216 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.216 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.216 23:05:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.217 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.217 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.217 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.217 23:05:54 -- accel/accel.sh@21 -- # val=Yes 00:06:49.217 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.217 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.217 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.217 23:05:54 -- accel/accel.sh@21 -- # val= 00:06:49.217 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.217 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.217 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.217 23:05:54 -- accel/accel.sh@21 -- # val= 00:06:49.217 23:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.217 23:05:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.217 23:05:54 -- accel/accel.sh@20 -- # read -r var val 00:06:50.593 23:05:55 -- accel/accel.sh@21 -- # val= 00:06:50.593 23:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.593 23:05:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.593 23:05:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.594 23:05:55 -- accel/accel.sh@21 -- # val= 00:06:50.594 23:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.594 23:05:55 -- accel/accel.sh@21 -- # val= 00:06:50.594 23:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.594 23:05:55 -- accel/accel.sh@21 -- # val= 00:06:50.594 23:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.594 23:05:55 -- accel/accel.sh@21 -- # val= 00:06:50.594 23:05:55 -- accel/accel.sh@22 -- # case "$var" in 
00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.594 23:05:55 -- accel/accel.sh@21 -- # val= 00:06:50.594 23:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.594 23:05:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.594 23:05:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.594 23:05:55 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:50.594 23:05:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.594 00:06:50.594 real 0m2.698s 00:06:50.594 user 0m2.440s 00:06:50.594 sys 0m0.258s 00:06:50.594 23:05:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.594 23:05:55 -- common/autotest_common.sh@10 -- # set +x 00:06:50.594 ************************************ 00:06:50.594 END TEST accel_crc32c 00:06:50.594 ************************************ 00:06:50.594 23:05:55 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:50.594 23:05:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:50.594 23:05:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.594 23:05:55 -- common/autotest_common.sh@10 -- # set +x 00:06:50.594 ************************************ 00:06:50.594 START TEST accel_crc32c_C2 00:06:50.594 ************************************ 00:06:50.594 23:05:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:50.594 23:05:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.594 23:05:55 -- accel/accel.sh@17 -- # local accel_module 00:06:50.594 23:05:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.594 23:05:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.594 23:05:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.594 23:05:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.594 23:05:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.594 23:05:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.594 23:05:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.594 23:05:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.594 23:05:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.594 23:05:55 -- accel/accel.sh@42 -- # jq -r . 00:06:50.594 [2024-11-02 23:05:56.013031] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:50.594 [2024-11-02 23:05:56.013106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459682 ] 00:06:50.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.594 [2024-11-02 23:05:56.082831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.594 [2024-11-02 23:05:56.148171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.972 23:05:57 -- accel/accel.sh@18 -- # out=' 00:06:51.972 SPDK Configuration: 00:06:51.972 Core mask: 0x1 00:06:51.972 00:06:51.972 Accel Perf Configuration: 00:06:51.972 Workload Type: crc32c 00:06:51.972 CRC-32C seed: 0 00:06:51.972 Transfer size: 4096 bytes 00:06:51.972 Vector count 2 00:06:51.972 Module: software 00:06:51.972 Queue depth: 32 00:06:51.972 Allocate depth: 32 00:06:51.972 # threads/core: 1 00:06:51.972 Run time: 1 seconds 00:06:51.972 Verify: Yes 00:06:51.972 00:06:51.972 Running for 1 seconds... 00:06:51.972 00:06:51.972 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.972 ------------------------------------------------------------------------------------ 00:06:51.972 0,0 473632/s 3700 MiB/s 0 0 00:06:51.972 ==================================================================================== 00:06:51.972 Total 473632/s 1850 MiB/s 0 0' 00:06:51.972 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.972 23:05:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:51.972 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.972 23:05:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:51.972 23:05:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.972 23:05:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.972 23:05:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.972 23:05:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.972 23:05:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.972 23:05:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.972 23:05:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.972 23:05:57 -- accel/accel.sh@42 -- # jq -r . 00:06:51.972 [2024-11-02 23:05:57.370216] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:51.972 [2024-11-02 23:05:57.370306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459949 ] 00:06:51.972 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.972 [2024-11-02 23:05:57.439535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.972 [2024-11-02 23:05:57.503934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.972 23:05:57 -- accel/accel.sh@21 -- # val= 00:06:51.972 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.972 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.972 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.972 23:05:57 -- accel/accel.sh@21 -- # val= 00:06:51.972 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.972 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.972 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.972 23:05:57 -- accel/accel.sh@21 -- # val=0x1 00:06:51.972 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.972 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val= 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val= 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val=crc32c 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val=0 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val= 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val=software 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val=32 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val=32 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- 
accel/accel.sh@21 -- # val=1 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val=Yes 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val= 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:51.973 23:05:57 -- accel/accel.sh@21 -- # val= 00:06:51.973 23:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:51.973 23:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 23:05:58 -- accel/accel.sh@21 -- # val= 00:06:53.362 23:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 23:05:58 -- accel/accel.sh@21 -- # val= 00:06:53.362 23:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 23:05:58 -- accel/accel.sh@21 -- # val= 00:06:53.362 23:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 23:05:58 -- accel/accel.sh@21 -- # val= 00:06:53.362 23:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 23:05:58 -- accel/accel.sh@21 -- # val= 00:06:53.362 23:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 23:05:58 -- accel/accel.sh@21 -- # val= 00:06:53.362 23:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 23:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 23:05:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.362 23:05:58 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:53.362 23:05:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.362 00:06:53.362 real 0m2.711s 00:06:53.362 user 0m2.455s 00:06:53.362 sys 0m0.255s 00:06:53.362 23:05:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.362 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:53.362 ************************************ 00:06:53.362 END TEST accel_crc32c_C2 00:06:53.362 ************************************ 00:06:53.362 23:05:58 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:53.362 23:05:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:53.362 23:05:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.362 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:53.362 ************************************ 00:06:53.362 START TEST accel_copy 
00:06:53.362 ************************************ 00:06:53.362 23:05:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:53.362 23:05:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.362 23:05:58 -- accel/accel.sh@17 -- # local accel_module 00:06:53.362 23:05:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:53.362 23:05:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:53.362 23:05:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.362 23:05:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.362 23:05:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.362 23:05:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.362 23:05:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.362 23:05:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.362 23:05:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.362 23:05:58 -- accel/accel.sh@42 -- # jq -r . 00:06:53.362 [2024-11-02 23:05:58.764691] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:53.362 [2024-11-02 23:05:58.764776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460229 ] 00:06:53.362 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.362 [2024-11-02 23:05:58.833932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.362 [2024-11-02 23:05:58.899597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.812 23:06:00 -- accel/accel.sh@18 -- # out=' 00:06:54.812 SPDK Configuration: 00:06:54.812 Core mask: 0x1 00:06:54.812 00:06:54.812 Accel Perf Configuration: 00:06:54.812 Workload Type: copy 00:06:54.812 Transfer size: 4096 bytes 00:06:54.812 Vector count 1 00:06:54.812 Module: software 00:06:54.812 Queue depth: 32 00:06:54.812 Allocate depth: 32 00:06:54.812 # threads/core: 1 00:06:54.812 Run time: 1 seconds 00:06:54.812 Verify: Yes 00:06:54.812 00:06:54.812 Running for 1 seconds... 00:06:54.812 00:06:54.812 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.812 ------------------------------------------------------------------------------------ 00:06:54.812 0,0 444224/s 1735 MiB/s 0 0 00:06:54.812 ==================================================================================== 00:06:54.812 Total 444224/s 1735 MiB/s 0 0' 00:06:54.812 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.812 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.812 23:06:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:54.813 23:06:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:54.813 23:06:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.813 23:06:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.813 23:06:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.813 23:06:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.813 23:06:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.813 23:06:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.813 23:06:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.813 23:06:00 -- accel/accel.sh@42 -- # jq -r . 00:06:54.813 [2024-11-02 23:06:00.122408] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
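The MiB/s column in these result tables follows directly from the transfer rate and the 4096-byte transfer size; a quick shell check against the copy figures just reported:

    # 444224 transfers/s x 4096 bytes per transfer, converted to MiB/s (integer arithmetic)
    echo $(( 444224 * 4096 / 1048576 ))    # prints 1735, matching the table above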
00:06:54.813 [2024-11-02 23:06:00.122477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460450 ] 00:06:54.813 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.813 [2024-11-02 23:06:00.192500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.813 [2024-11-02 23:06:00.260237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val= 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val= 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val=0x1 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val= 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val= 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val=copy 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val= 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val=software 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val=32 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val=32 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val=1 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val=Yes 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val= 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:54.813 23:06:00 -- accel/accel.sh@21 -- # val= 00:06:54.813 23:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:54.813 23:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:55.776 23:06:01 -- accel/accel.sh@21 -- # val= 00:06:55.776 23:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.776 23:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.776 23:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.776 23:06:01 -- accel/accel.sh@21 -- # val= 00:06:55.776 23:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.776 23:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.776 23:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.776 23:06:01 -- accel/accel.sh@21 -- # val= 00:06:55.776 23:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.776 23:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.776 23:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.776 23:06:01 -- accel/accel.sh@21 -- # val= 00:06:55.776 23:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.776 23:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.776 23:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.776 23:06:01 -- accel/accel.sh@21 -- # val= 00:06:55.776 23:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.777 23:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.777 23:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.777 23:06:01 -- accel/accel.sh@21 -- # val= 00:06:55.777 23:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.777 23:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.777 23:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.777 23:06:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.777 23:06:01 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:55.777 23:06:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.777 00:06:55.777 real 0m2.720s 00:06:55.777 user 0m2.459s 00:06:55.777 sys 0m0.260s 00:06:55.777 23:06:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.777 23:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:55.777 ************************************ 00:06:55.777 END TEST accel_copy 00:06:55.777 ************************************ 00:06:55.777 23:06:01 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.777 23:06:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:55.777 23:06:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.777 23:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:55.777 ************************************ 00:06:55.777 START TEST accel_fill 00:06:55.777 ************************************ 00:06:55.777 23:06:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.777 23:06:01 -- accel/accel.sh@16 -- # local accel_opc 
00:06:55.777 23:06:01 -- accel/accel.sh@17 -- # local accel_module 00:06:55.777 23:06:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.777 23:06:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.777 23:06:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.777 23:06:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.777 23:06:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.777 23:06:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.777 23:06:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.777 23:06:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.777 23:06:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.777 23:06:01 -- accel/accel.sh@42 -- # jq -r . 00:06:55.777 [2024-11-02 23:06:01.524348] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:55.777 [2024-11-02 23:06:01.524429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460764 ] 00:06:56.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.036 [2024-11-02 23:06:01.593919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.036 [2024-11-02 23:06:01.664619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.415 23:06:02 -- accel/accel.sh@18 -- # out=' 00:06:57.415 SPDK Configuration: 00:06:57.415 Core mask: 0x1 00:06:57.415 00:06:57.415 Accel Perf Configuration: 00:06:57.415 Workload Type: fill 00:06:57.415 Fill pattern: 0x80 00:06:57.415 Transfer size: 4096 bytes 00:06:57.415 Vector count 1 00:06:57.415 Module: software 00:06:57.415 Queue depth: 64 00:06:57.415 Allocate depth: 64 00:06:57.415 # threads/core: 1 00:06:57.415 Run time: 1 seconds 00:06:57.415 Verify: Yes 00:06:57.415 00:06:57.415 Running for 1 seconds... 00:06:57.415 00:06:57.415 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.415 ------------------------------------------------------------------------------------ 00:06:57.415 0,0 675200/s 2637 MiB/s 0 0 00:06:57.415 ==================================================================================== 00:06:57.415 Total 675200/s 2637 MiB/s 0 0' 00:06:57.415 23:06:02 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.415 23:06:02 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.415 23:06:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.415 23:06:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.415 23:06:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.415 23:06:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.415 23:06:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.415 23:06:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.415 23:06:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.415 23:06:02 -- accel/accel.sh@42 -- # jq -r . 00:06:57.415 [2024-11-02 23:06:02.871149] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
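In the fill test the command-line flags map directly onto the configuration block that follows: -f 128 is the fill byte (0x80), -q 64 the queue depth and -a 64 the allocate depth; a small sketch of the decimal-to-hex correspondence and the equivalent standalone invocation (run from the spdk checkout, as above):

    printf 'fill pattern: 0x%02x\n' 128      # prints: fill pattern: 0x80
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y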
00:06:57.415 [2024-11-02 23:06:02.871201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460966 ] 00:06:57.415 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.415 [2024-11-02 23:06:02.939733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.415 [2024-11-02 23:06:03.005583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val= 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val= 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val=0x1 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val= 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val= 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val=fill 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val=0x80 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val= 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val=software 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val=64 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val=64 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- 
accel/accel.sh@21 -- # val=1 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val=Yes 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val= 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.415 23:06:03 -- accel/accel.sh@21 -- # val= 00:06:57.415 23:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.415 23:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.794 23:06:04 -- accel/accel.sh@21 -- # val= 00:06:58.794 23:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.794 23:06:04 -- accel/accel.sh@21 -- # val= 00:06:58.794 23:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.794 23:06:04 -- accel/accel.sh@21 -- # val= 00:06:58.794 23:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.794 23:06:04 -- accel/accel.sh@21 -- # val= 00:06:58.794 23:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.794 23:06:04 -- accel/accel.sh@21 -- # val= 00:06:58.794 23:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.794 23:06:04 -- accel/accel.sh@21 -- # val= 00:06:58.794 23:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.794 23:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.794 23:06:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.794 23:06:04 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:58.794 23:06:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.794 00:06:58.794 real 0m2.704s 00:06:58.794 user 0m2.469s 00:06:58.794 sys 0m0.234s 00:06:58.794 23:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.794 23:06:04 -- common/autotest_common.sh@10 -- # set +x 00:06:58.794 ************************************ 00:06:58.794 END TEST accel_fill 00:06:58.794 ************************************ 00:06:58.794 23:06:04 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:58.794 23:06:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.794 23:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.794 23:06:04 -- common/autotest_common.sh@10 -- # set +x 00:06:58.794 ************************************ 00:06:58.794 START TEST 
accel_copy_crc32c 00:06:58.794 ************************************ 00:06:58.794 23:06:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:58.794 23:06:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.794 23:06:04 -- accel/accel.sh@17 -- # local accel_module 00:06:58.794 23:06:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.794 23:06:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.794 23:06:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.794 23:06:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.794 23:06:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.794 23:06:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.794 23:06:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.794 23:06:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.794 23:06:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.794 23:06:04 -- accel/accel.sh@42 -- # jq -r . 00:06:58.794 [2024-11-02 23:06:04.266399] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:58.794 [2024-11-02 23:06:04.266464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461338 ] 00:06:58.794 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.794 [2024-11-02 23:06:04.335822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.794 [2024-11-02 23:06:04.402886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.173 23:06:05 -- accel/accel.sh@18 -- # out=' 00:07:00.173 SPDK Configuration: 00:07:00.173 Core mask: 0x1 00:07:00.173 00:07:00.173 Accel Perf Configuration: 00:07:00.173 Workload Type: copy_crc32c 00:07:00.173 CRC-32C seed: 0 00:07:00.173 Vector size: 4096 bytes 00:07:00.173 Transfer size: 4096 bytes 00:07:00.173 Vector count 1 00:07:00.173 Module: software 00:07:00.173 Queue depth: 32 00:07:00.173 Allocate depth: 32 00:07:00.173 # threads/core: 1 00:07:00.173 Run time: 1 seconds 00:07:00.173 Verify: Yes 00:07:00.173 00:07:00.173 Running for 1 seconds... 00:07:00.173 00:07:00.173 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.173 ------------------------------------------------------------------------------------ 00:07:00.173 0,0 330112/s 1289 MiB/s 0 0 00:07:00.173 ==================================================================================== 00:07:00.173 Total 330112/s 1289 MiB/s 0 0' 00:07:00.173 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.173 23:06:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:00.173 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.173 23:06:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:00.173 23:06:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.173 23:06:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.173 23:06:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.173 23:06:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.173 23:06:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.173 23:06:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.173 23:06:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.173 23:06:05 -- accel/accel.sh@42 -- # jq -r . 
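The -c /dev/fd/62 argument shows the harness feeding accel_perf its JSON accel configuration over an anonymous file descriptor via bash process substitution rather than a file on disk; a rough sketch of that pattern, with a placeholder config standing in for whatever build_accel_config actually generates:

    # '{"subsystems": []}' is only a stand-in; the harness-built config is not reproduced here
    ./build/examples/accel_perf -c <(echo '{"subsystems": []}' | jq -r .) -t 1 -w copy_crc32c -y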
00:07:00.173 [2024-11-02 23:06:05.608174] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:00.173 [2024-11-02 23:06:05.608228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461871 ] 00:07:00.173 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.173 [2024-11-02 23:06:05.675333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.173 [2024-11-02 23:06:05.739932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.173 23:06:05 -- accel/accel.sh@21 -- # val= 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val= 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val=0x1 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val= 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val= 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val=0 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val= 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val=software 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val=32 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 
00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val=32 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val=1 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val=Yes 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val= 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:00.174 23:06:05 -- accel/accel.sh@21 -- # val= 00:07:00.174 23:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:00.174 23:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:01.552 23:06:06 -- accel/accel.sh@21 -- # val= 00:07:01.552 23:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # IFS=: 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # read -r var val 00:07:01.552 23:06:06 -- accel/accel.sh@21 -- # val= 00:07:01.552 23:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # IFS=: 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # read -r var val 00:07:01.552 23:06:06 -- accel/accel.sh@21 -- # val= 00:07:01.552 23:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # IFS=: 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # read -r var val 00:07:01.552 23:06:06 -- accel/accel.sh@21 -- # val= 00:07:01.552 23:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # IFS=: 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # read -r var val 00:07:01.552 23:06:06 -- accel/accel.sh@21 -- # val= 00:07:01.552 23:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # IFS=: 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # read -r var val 00:07:01.552 23:06:06 -- accel/accel.sh@21 -- # val= 00:07:01.552 23:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # IFS=: 00:07:01.552 23:06:06 -- accel/accel.sh@20 -- # read -r var val 00:07:01.552 23:06:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.552 23:06:06 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:01.552 23:06:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.552 00:07:01.552 real 0m2.697s 00:07:01.552 user 0m2.463s 00:07:01.552 sys 0m0.233s 00:07:01.552 23:06:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.552 23:06:06 -- common/autotest_common.sh@10 -- # set +x 00:07:01.552 ************************************ 00:07:01.552 END TEST accel_copy_crc32c 00:07:01.552 ************************************ 00:07:01.552 
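Each subtest in this log is driven the same way: run_test prints the START/END banners and the real/user/sys timing, then hands the remaining arguments to accel_test, which in turn invokes the accel_perf binary with the matching flags. A simplified stand-in for that wrapper (not the actual autotest_common.sh implementation) behaves like:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # reported per test as 'real/user/sys'
        echo "END TEST $name"
    }
    run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2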
23:06:06 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:01.553 23:06:06 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:01.553 23:06:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.553 23:06:06 -- common/autotest_common.sh@10 -- # set +x 00:07:01.553 ************************************ 00:07:01.553 START TEST accel_copy_crc32c_C2 00:07:01.553 ************************************ 00:07:01.553 23:06:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:01.553 23:06:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.553 23:06:06 -- accel/accel.sh@17 -- # local accel_module 00:07:01.553 23:06:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:01.553 23:06:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:01.553 23:06:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.553 23:06:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.553 23:06:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.553 23:06:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.553 23:06:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.553 23:06:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.553 23:06:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.553 23:06:06 -- accel/accel.sh@42 -- # jq -r . 00:07:01.553 [2024-11-02 23:06:06.998236] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:01.553 [2024-11-02 23:06:06.998301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462202 ] 00:07:01.553 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.553 [2024-11-02 23:06:07.068238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.553 [2024-11-02 23:06:07.136105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.932 23:06:08 -- accel/accel.sh@18 -- # out=' 00:07:02.932 SPDK Configuration: 00:07:02.932 Core mask: 0x1 00:07:02.932 00:07:02.932 Accel Perf Configuration: 00:07:02.932 Workload Type: copy_crc32c 00:07:02.932 CRC-32C seed: 0 00:07:02.932 Vector size: 4096 bytes 00:07:02.932 Transfer size: 8192 bytes 00:07:02.932 Vector count 2 00:07:02.932 Module: software 00:07:02.932 Queue depth: 32 00:07:02.932 Allocate depth: 32 00:07:02.932 # threads/core: 1 00:07:02.932 Run time: 1 seconds 00:07:02.932 Verify: Yes 00:07:02.932 00:07:02.932 Running for 1 seconds... 
00:07:02.932 00:07:02.932 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.932 ------------------------------------------------------------------------------------ 00:07:02.932 0,0 241568/s 1887 MiB/s 0 0 00:07:02.932 ==================================================================================== 00:07:02.932 Total 241568/s 943 MiB/s 0 0' 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:02.932 23:06:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.932 23:06:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.932 23:06:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.932 23:06:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.932 23:06:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.932 23:06:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.932 23:06:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.932 23:06:08 -- accel/accel.sh@42 -- # jq -r . 00:07:02.932 [2024-11-02 23:06:08.341005] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:02.932 [2024-11-02 23:06:08.341058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462475 ] 00:07:02.932 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.932 [2024-11-02 23:06:08.408540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.932 [2024-11-02 23:06:08.472162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val= 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val= 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val=0x1 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val= 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val= 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val=0 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 
00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.932 23:06:08 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:02.932 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.932 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val= 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val=software 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val=32 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val=32 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val=1 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val=Yes 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val= 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:02.933 23:06:08 -- accel/accel.sh@21 -- # val= 00:07:02.933 23:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:07:02.933 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.311 23:06:09 -- accel/accel.sh@21 -- # val= 00:07:04.311 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.311 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:07:04.311 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:07:04.311 23:06:09 -- accel/accel.sh@21 -- # val= 00:07:04.311 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.311 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:07:04.311 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:07:04.311 23:06:09 -- accel/accel.sh@21 -- # val= 00:07:04.311 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.312 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:07:04.312 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:07:04.312 23:06:09 -- accel/accel.sh@21 -- # val= 00:07:04.312 23:06:09 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:04.312 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:07:04.312 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:07:04.312 23:06:09 -- accel/accel.sh@21 -- # val= 00:07:04.312 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.312 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:07:04.312 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:07:04.312 23:06:09 -- accel/accel.sh@21 -- # val= 00:07:04.312 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.312 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:07:04.312 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:07:04.312 23:06:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.312 23:06:09 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:04.312 23:06:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.312 00:07:04.312 real 0m2.694s 00:07:04.312 user 0m2.457s 00:07:04.312 sys 0m0.238s 00:07:04.312 23:06:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.312 23:06:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.312 ************************************ 00:07:04.312 END TEST accel_copy_crc32c_C2 00:07:04.312 ************************************ 00:07:04.312 23:06:09 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:04.312 23:06:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:04.312 23:06:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.312 23:06:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.312 ************************************ 00:07:04.312 START TEST accel_dualcast 00:07:04.312 ************************************ 00:07:04.312 23:06:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:04.312 23:06:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.312 23:06:09 -- accel/accel.sh@17 -- # local accel_module 00:07:04.312 23:06:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:04.312 23:06:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:04.312 23:06:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.312 23:06:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.312 23:06:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.312 23:06:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.312 23:06:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.312 23:06:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.312 23:06:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.312 23:06:09 -- accel/accel.sh@42 -- # jq -r . 00:07:04.312 [2024-11-02 23:06:09.731240] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:04.312 [2024-11-02 23:06:09.731322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462756 ] 00:07:04.312 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.312 [2024-11-02 23:06:09.802434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.312 [2024-11-02 23:06:09.868257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.690 23:06:11 -- accel/accel.sh@18 -- # out=' 00:07:05.690 SPDK Configuration: 00:07:05.690 Core mask: 0x1 00:07:05.690 00:07:05.690 Accel Perf Configuration: 00:07:05.690 Workload Type: dualcast 00:07:05.691 Transfer size: 4096 bytes 00:07:05.691 Vector count 1 00:07:05.691 Module: software 00:07:05.691 Queue depth: 32 00:07:05.691 Allocate depth: 32 00:07:05.691 # threads/core: 1 00:07:05.691 Run time: 1 seconds 00:07:05.691 Verify: Yes 00:07:05.691 00:07:05.691 Running for 1 seconds... 00:07:05.691 00:07:05.691 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.691 ------------------------------------------------------------------------------------ 00:07:05.691 0,0 511968/s 1999 MiB/s 0 0 00:07:05.691 ==================================================================================== 00:07:05.691 Total 511968/s 1999 MiB/s 0 0' 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:05.691 23:06:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.691 23:06:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.691 23:06:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.691 23:06:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.691 23:06:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.691 23:06:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.691 23:06:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.691 23:06:11 -- accel/accel.sh@42 -- # jq -r . 00:07:05.691 [2024-11-02 23:06:11.072649] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
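dualcast differs from plain copy in that each 4096-byte source buffer is written out to two destination buffers in a single operation; a standalone invocation equivalent to this run, under the same assumptions as the earlier sketches:

    ./build/examples/accel_perf -t 1 -w dualcast -y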
00:07:05.691 [2024-11-02 23:06:11.072703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463030 ] 00:07:05.691 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.691 [2024-11-02 23:06:11.139092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.691 [2024-11-02 23:06:11.202390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val= 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val= 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val=0x1 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val= 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val= 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val=dualcast 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val= 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val=software 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val=32 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val=32 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val=1 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val=Yes 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val= 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:05.691 23:06:11 -- accel/accel.sh@21 -- # val= 00:07:05.691 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:05.691 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.070 23:06:12 -- accel/accel.sh@21 -- # val= 00:07:07.070 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:07.070 23:06:12 -- accel/accel.sh@21 -- # val= 00:07:07.070 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:07.070 23:06:12 -- accel/accel.sh@21 -- # val= 00:07:07.070 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:07.070 23:06:12 -- accel/accel.sh@21 -- # val= 00:07:07.070 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:07.070 23:06:12 -- accel/accel.sh@21 -- # val= 00:07:07.070 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:07.070 23:06:12 -- accel/accel.sh@21 -- # val= 00:07:07.070 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:07.070 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:07.070 23:06:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.070 23:06:12 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:07.070 23:06:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.070 00:07:07.070 real 0m2.694s 00:07:07.070 user 0m2.438s 00:07:07.070 sys 0m0.254s 00:07:07.070 23:06:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.070 23:06:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.070 ************************************ 00:07:07.070 END TEST accel_dualcast 00:07:07.070 ************************************ 00:07:07.070 23:06:12 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:07.070 23:06:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:07.070 23:06:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.070 23:06:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.070 ************************************ 00:07:07.070 START TEST accel_compare 00:07:07.070 ************************************ 00:07:07.070 23:06:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:07.070 23:06:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.070 23:06:12 -- 
accel/accel.sh@17 -- # local accel_module 00:07:07.070 23:06:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:07.070 23:06:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:07.070 23:06:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.070 23:06:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.070 23:06:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.070 23:06:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.070 23:06:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.070 23:06:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.070 23:06:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.070 23:06:12 -- accel/accel.sh@42 -- # jq -r . 00:07:07.070 [2024-11-02 23:06:12.460090] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:07.070 [2024-11-02 23:06:12.460158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463296 ] 00:07:07.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.070 [2024-11-02 23:06:12.527193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.070 [2024-11-02 23:06:12.592113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.450 23:06:13 -- accel/accel.sh@18 -- # out=' 00:07:08.450 SPDK Configuration: 00:07:08.450 Core mask: 0x1 00:07:08.450 00:07:08.450 Accel Perf Configuration: 00:07:08.450 Workload Type: compare 00:07:08.450 Transfer size: 4096 bytes 00:07:08.450 Vector count 1 00:07:08.450 Module: software 00:07:08.450 Queue depth: 32 00:07:08.450 Allocate depth: 32 00:07:08.450 # threads/core: 1 00:07:08.450 Run time: 1 seconds 00:07:08.450 Verify: Yes 00:07:08.450 00:07:08.450 Running for 1 seconds... 00:07:08.450 00:07:08.451 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.451 ------------------------------------------------------------------------------------ 00:07:08.451 0,0 652960/s 2550 MiB/s 0 0 00:07:08.451 ==================================================================================== 00:07:08.451 Total 652960/s 2550 MiB/s 0 0' 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:08.451 23:06:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:08.451 23:06:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.451 23:06:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.451 23:06:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.451 23:06:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.451 23:06:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.451 23:06:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.451 23:06:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.451 23:06:13 -- accel/accel.sh@42 -- # jq -r . 00:07:08.451 [2024-11-02 23:06:13.798671] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:08.451 [2024-11-02 23:06:13.798726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463466 ] 00:07:08.451 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.451 [2024-11-02 23:06:13.864525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.451 [2024-11-02 23:06:13.931280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val= 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val= 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val=0x1 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val= 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val= 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val=compare 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val= 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val=software 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val=32 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val=32 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val=1 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val=Yes 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val= 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:08.451 23:06:13 -- accel/accel.sh@21 -- # val= 00:07:08.451 23:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:08.451 23:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:09.388 23:06:15 -- accel/accel.sh@21 -- # val= 00:07:09.388 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:09.388 23:06:15 -- accel/accel.sh@21 -- # val= 00:07:09.388 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:09.388 23:06:15 -- accel/accel.sh@21 -- # val= 00:07:09.388 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:09.388 23:06:15 -- accel/accel.sh@21 -- # val= 00:07:09.388 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:09.388 23:06:15 -- accel/accel.sh@21 -- # val= 00:07:09.388 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:09.388 23:06:15 -- accel/accel.sh@21 -- # val= 00:07:09.388 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:09.388 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:09.388 23:06:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.388 23:06:15 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:09.388 23:06:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.388 00:07:09.388 real 0m2.692s 00:07:09.388 user 0m2.452s 00:07:09.388 sys 0m0.240s 00:07:09.388 23:06:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.388 23:06:15 -- common/autotest_common.sh@10 -- # set +x 00:07:09.388 ************************************ 00:07:09.388 END TEST accel_compare 00:07:09.388 ************************************ 00:07:09.647 23:06:15 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:09.647 23:06:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:09.647 23:06:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.647 23:06:15 -- common/autotest_common.sh@10 -- # set +x 00:07:09.647 ************************************ 00:07:09.647 START TEST accel_xor 00:07:09.647 ************************************ 00:07:09.647 23:06:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:09.647 23:06:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.647 23:06:15 -- accel/accel.sh@17 
-- # local accel_module 00:07:09.647 23:06:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:09.647 23:06:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:09.647 23:06:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.647 23:06:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.647 23:06:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.647 23:06:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.647 23:06:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.647 23:06:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.647 23:06:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.647 23:06:15 -- accel/accel.sh@42 -- # jq -r . 00:07:09.647 [2024-11-02 23:06:15.198238] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:09.647 [2024-11-02 23:06:15.198306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463687 ] 00:07:09.647 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.647 [2024-11-02 23:06:15.269068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.647 [2024-11-02 23:06:15.340157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.026 23:06:16 -- accel/accel.sh@18 -- # out=' 00:07:11.026 SPDK Configuration: 00:07:11.026 Core mask: 0x1 00:07:11.026 00:07:11.026 Accel Perf Configuration: 00:07:11.026 Workload Type: xor 00:07:11.026 Source buffers: 2 00:07:11.026 Transfer size: 4096 bytes 00:07:11.026 Vector count 1 00:07:11.026 Module: software 00:07:11.026 Queue depth: 32 00:07:11.026 Allocate depth: 32 00:07:11.027 # threads/core: 1 00:07:11.027 Run time: 1 seconds 00:07:11.027 Verify: Yes 00:07:11.027 00:07:11.027 Running for 1 seconds... 00:07:11.027 00:07:11.027 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.027 ------------------------------------------------------------------------------------ 00:07:11.027 0,0 496224/s 1938 MiB/s 0 0 00:07:11.027 ==================================================================================== 00:07:11.027 Total 496224/s 1938 MiB/s 0 0' 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:11.027 23:06:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:11.027 23:06:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.027 23:06:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.027 23:06:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.027 23:06:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.027 23:06:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.027 23:06:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.027 23:06:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.027 23:06:16 -- accel/accel.sh@42 -- # jq -r . 00:07:11.027 [2024-11-02 23:06:16.545141] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:11.027 [2024-11-02 23:06:16.545194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463897 ] 00:07:11.027 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.027 [2024-11-02 23:06:16.613667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.027 [2024-11-02 23:06:16.678588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val= 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val= 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val=0x1 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val= 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val= 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val=xor 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val=2 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val= 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val=software 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val=32 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val=32 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- 
accel/accel.sh@21 -- # val=1 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val=Yes 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val= 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:11.027 23:06:16 -- accel/accel.sh@21 -- # val= 00:07:11.027 23:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:11.027 23:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 23:06:17 -- accel/accel.sh@21 -- # val= 00:07:12.406 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 23:06:17 -- accel/accel.sh@21 -- # val= 00:07:12.406 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 23:06:17 -- accel/accel.sh@21 -- # val= 00:07:12.406 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 23:06:17 -- accel/accel.sh@21 -- # val= 00:07:12.406 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 23:06:17 -- accel/accel.sh@21 -- # val= 00:07:12.406 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 23:06:17 -- accel/accel.sh@21 -- # val= 00:07:12.406 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 23:06:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.406 23:06:17 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:12.406 23:06:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.406 00:07:12.406 real 0m2.703s 00:07:12.406 user 0m2.459s 00:07:12.406 sys 0m0.244s 00:07:12.406 23:06:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.406 23:06:17 -- common/autotest_common.sh@10 -- # set +x 00:07:12.406 ************************************ 00:07:12.406 END TEST accel_xor 00:07:12.406 ************************************ 00:07:12.406 23:06:17 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:12.406 23:06:17 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:12.406 23:06:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.406 23:06:17 -- common/autotest_common.sh@10 -- # set +x 00:07:12.406 ************************************ 00:07:12.406 START TEST accel_xor 
00:07:12.406 ************************************ 00:07:12.406 23:06:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:12.406 23:06:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.406 23:06:17 -- accel/accel.sh@17 -- # local accel_module 00:07:12.406 23:06:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.406 23:06:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.406 23:06:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.406 23:06:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.406 23:06:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.406 23:06:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.406 23:06:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.406 23:06:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.406 23:06:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.406 23:06:17 -- accel/accel.sh@42 -- # jq -r . 00:07:12.406 [2024-11-02 23:06:17.937428] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:12.406 [2024-11-02 23:06:17.937493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464178 ] 00:07:12.406 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.406 [2024-11-02 23:06:18.005930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.406 [2024-11-02 23:06:18.070909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.787 23:06:19 -- accel/accel.sh@18 -- # out=' 00:07:13.787 SPDK Configuration: 00:07:13.787 Core mask: 0x1 00:07:13.787 00:07:13.787 Accel Perf Configuration: 00:07:13.787 Workload Type: xor 00:07:13.787 Source buffers: 3 00:07:13.787 Transfer size: 4096 bytes 00:07:13.787 Vector count 1 00:07:13.787 Module: software 00:07:13.787 Queue depth: 32 00:07:13.787 Allocate depth: 32 00:07:13.787 # threads/core: 1 00:07:13.787 Run time: 1 seconds 00:07:13.787 Verify: Yes 00:07:13.787 00:07:13.787 Running for 1 seconds... 00:07:13.787 00:07:13.787 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.787 ------------------------------------------------------------------------------------ 00:07:13.787 0,0 467200/s 1825 MiB/s 0 0 00:07:13.787 ==================================================================================== 00:07:13.787 Total 467200/s 1825 MiB/s 0 0' 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:13.787 23:06:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.787 23:06:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.787 23:06:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.787 23:06:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.787 23:06:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.787 23:06:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.787 23:06:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.787 23:06:19 -- accel/accel.sh@42 -- # jq -r . 00:07:13.787 [2024-11-02 23:06:19.275162] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:13.787 [2024-11-02 23:06:19.275212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464450 ] 00:07:13.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.787 [2024-11-02 23:06:19.340315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.787 [2024-11-02 23:06:19.404220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val= 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val= 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val=0x1 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val= 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val= 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val=xor 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val=3 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val= 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val=software 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val=32 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val=32 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- 
accel/accel.sh@21 -- # val=1 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val=Yes 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val= 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.787 23:06:19 -- accel/accel.sh@21 -- # val= 00:07:13.787 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.787 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 23:06:20 -- accel/accel.sh@21 -- # val= 00:07:15.165 23:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 23:06:20 -- accel/accel.sh@21 -- # val= 00:07:15.165 23:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 23:06:20 -- accel/accel.sh@21 -- # val= 00:07:15.165 23:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 23:06:20 -- accel/accel.sh@21 -- # val= 00:07:15.165 23:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 23:06:20 -- accel/accel.sh@21 -- # val= 00:07:15.165 23:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 23:06:20 -- accel/accel.sh@21 -- # val= 00:07:15.165 23:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 23:06:20 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 23:06:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.165 23:06:20 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:15.165 23:06:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.165 00:07:15.165 real 0m2.691s 00:07:15.165 user 0m2.454s 00:07:15.165 sys 0m0.236s 00:07:15.165 23:06:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.165 23:06:20 -- common/autotest_common.sh@10 -- # set +x 00:07:15.165 ************************************ 00:07:15.165 END TEST accel_xor 00:07:15.165 ************************************ 00:07:15.165 23:06:20 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:15.165 23:06:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:15.165 23:06:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.165 23:06:20 -- common/autotest_common.sh@10 -- # set +x 00:07:15.165 ************************************ 00:07:15.165 START TEST 
accel_dif_verify 00:07:15.165 ************************************ 00:07:15.165 23:06:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:15.165 23:06:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.165 23:06:20 -- accel/accel.sh@17 -- # local accel_module 00:07:15.165 23:06:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:15.165 23:06:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:15.165 23:06:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.165 23:06:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.165 23:06:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.165 23:06:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.165 23:06:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.165 23:06:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.165 23:06:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.165 23:06:20 -- accel/accel.sh@42 -- # jq -r . 00:07:15.165 [2024-11-02 23:06:20.665401] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:15.165 [2024-11-02 23:06:20.665464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464733 ] 00:07:15.165 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.166 [2024-11-02 23:06:20.735449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.166 [2024-11-02 23:06:20.808953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.545 23:06:22 -- accel/accel.sh@18 -- # out=' 00:07:16.545 SPDK Configuration: 00:07:16.545 Core mask: 0x1 00:07:16.545 00:07:16.545 Accel Perf Configuration: 00:07:16.545 Workload Type: dif_verify 00:07:16.545 Vector size: 4096 bytes 00:07:16.545 Transfer size: 4096 bytes 00:07:16.545 Block size: 512 bytes 00:07:16.545 Metadata size: 8 bytes 00:07:16.545 Vector count 1 00:07:16.545 Module: software 00:07:16.545 Queue depth: 32 00:07:16.545 Allocate depth: 32 00:07:16.545 # threads/core: 1 00:07:16.545 Run time: 1 seconds 00:07:16.545 Verify: No 00:07:16.545 00:07:16.545 Running for 1 seconds... 00:07:16.545 00:07:16.545 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.545 ------------------------------------------------------------------------------------ 00:07:16.545 0,0 138112/s 547 MiB/s 0 0 00:07:16.545 ==================================================================================== 00:07:16.545 Total 138112/s 539 MiB/s 0 0' 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:16.545 23:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.545 23:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.545 23:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.545 23:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.545 23:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.545 23:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.545 23:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.545 23:06:22 -- accel/accel.sh@42 -- # jq -r . 
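Note on the dif_verify pass traced above: it is launched as run_test accel_dif_verify accel_test -t 1 -w dif_verify, which ultimately executes the accel_perf binary shown on the accel.sh@12 line (-c /dev/fd/62 -t 1 -w dif_verify). As a minimal sketch only, the same workload could be re-run by hand from this workspace using just the flags visible in the trace; dropping the -c /dev/fd/62 JSON config that build_accel_config normally feeds in is an assumption that the software accel module defaults are acceptable here, not something this log confirms:

    # hypothetical manual re-run of the dif_verify workload seen in this log;
    # -t = run time in seconds, -w = workload type (flags exactly as they appear in the trace above)
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w dif_verify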
00:07:16.545 [2024-11-02 23:06:22.015358] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:16.545 [2024-11-02 23:06:22.015414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465005 ] 00:07:16.545 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.545 [2024-11-02 23:06:22.082196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.545 [2024-11-02 23:06:22.145813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val= 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val= 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val=0x1 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val= 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val= 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val=dif_verify 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val= 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val=software 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val=32 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val=32 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.545 23:06:22 -- accel/accel.sh@21 -- # val=1 00:07:16.545 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.545 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 23:06:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.546 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 23:06:22 -- accel/accel.sh@21 -- # val=No 00:07:16.546 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 23:06:22 -- accel/accel.sh@21 -- # val= 00:07:16.546 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.546 23:06:22 -- accel/accel.sh@21 -- # val= 00:07:16.546 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.546 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.546 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.925 23:06:23 -- accel/accel.sh@21 -- # val= 00:07:17.925 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.925 23:06:23 -- accel/accel.sh@21 -- # val= 00:07:17.925 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.925 23:06:23 -- accel/accel.sh@21 -- # val= 00:07:17.925 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.925 23:06:23 -- accel/accel.sh@21 -- # val= 00:07:17.925 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.925 23:06:23 -- accel/accel.sh@21 -- # val= 00:07:17.925 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.925 23:06:23 -- accel/accel.sh@21 -- # val= 00:07:17.925 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.925 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.925 23:06:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.925 23:06:23 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:17.925 23:06:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.925 00:07:17.925 real 0m2.701s 00:07:17.925 user 0m2.456s 00:07:17.925 sys 0m0.245s 00:07:17.925 23:06:23 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.925 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:07:17.925 ************************************ 00:07:17.925 END TEST accel_dif_verify 00:07:17.925 ************************************ 00:07:17.925 23:06:23 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:17.925 23:06:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:17.925 23:06:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.925 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:07:17.925 ************************************ 00:07:17.925 START TEST accel_dif_generate 00:07:17.925 ************************************ 00:07:17.925 23:06:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:17.925 23:06:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.925 23:06:23 -- accel/accel.sh@17 -- # local accel_module 00:07:17.925 23:06:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:17.925 23:06:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:17.925 23:06:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.925 23:06:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.925 23:06:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.925 23:06:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.925 23:06:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.925 23:06:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.925 23:06:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.925 23:06:23 -- accel/accel.sh@42 -- # jq -r . 00:07:17.925 [2024-11-02 23:06:23.405538] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:17.925 [2024-11-02 23:06:23.405604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465294 ] 00:07:17.925 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.925 [2024-11-02 23:06:23.474616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.925 [2024-11-02 23:06:23.540010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.318 23:06:24 -- accel/accel.sh@18 -- # out=' 00:07:19.318 SPDK Configuration: 00:07:19.318 Core mask: 0x1 00:07:19.318 00:07:19.318 Accel Perf Configuration: 00:07:19.318 Workload Type: dif_generate 00:07:19.318 Vector size: 4096 bytes 00:07:19.318 Transfer size: 4096 bytes 00:07:19.318 Block size: 512 bytes 00:07:19.318 Metadata size: 8 bytes 00:07:19.318 Vector count 1 00:07:19.318 Module: software 00:07:19.318 Queue depth: 32 00:07:19.318 Allocate depth: 32 00:07:19.318 # threads/core: 1 00:07:19.318 Run time: 1 seconds 00:07:19.318 Verify: No 00:07:19.318 00:07:19.318 Running for 1 seconds... 
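Note on reading the table that follows (and the other per-workload tables in this run): with the 4096-byte transfer size echoed in the configuration above, the Total row works out to transfers/s * 4096 bytes / 1048576 expressed in MiB/s, while the per-core row can differ from that by a few MiB/s in these logs. A quick check against the dif_generate total reported below, as an illustrative one-liner rather than part of the test harness:

    # 164384 transfers/s * 4096 bytes per transfer, expressed in MiB/s (1 MiB = 1048576 bytes)
    awk 'BEGIN { printf "%.0f MiB/s\n", 164384 * 4096 / 1048576 }'    # prints: 642 MiB/s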
00:07:19.318 00:07:19.318 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.318 ------------------------------------------------------------------------------------ 00:07:19.318 0,0 164384/s 652 MiB/s 0 0 00:07:19.318 ==================================================================================== 00:07:19.318 Total 164384/s 642 MiB/s 0 0' 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:19.318 23:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.318 23:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.318 23:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.318 23:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.318 23:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.318 23:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.318 23:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.318 23:06:24 -- accel/accel.sh@42 -- # jq -r . 00:07:19.318 [2024-11-02 23:06:24.744053] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:19.318 [2024-11-02 23:06:24.744106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465560 ] 00:07:19.318 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.318 [2024-11-02 23:06:24.809715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.318 [2024-11-02 23:06:24.873665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val= 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val= 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val=0x1 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val= 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val= 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val=dif_generate 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 
00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val= 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.318 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.318 23:06:24 -- accel/accel.sh@21 -- # val=software 00:07:19.318 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.319 23:06:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.319 23:06:24 -- accel/accel.sh@21 -- # val=32 00:07:19.319 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.319 23:06:24 -- accel/accel.sh@21 -- # val=32 00:07:19.319 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.319 23:06:24 -- accel/accel.sh@21 -- # val=1 00:07:19.319 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.319 23:06:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.319 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.319 23:06:24 -- accel/accel.sh@21 -- # val=No 00:07:19.319 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.319 23:06:24 -- accel/accel.sh@21 -- # val= 00:07:19.319 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.319 23:06:24 -- accel/accel.sh@21 -- # val= 00:07:19.319 23:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.319 23:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.697 23:06:26 -- accel/accel.sh@21 -- # val= 00:07:20.697 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.697 23:06:26 -- accel/accel.sh@21 -- # val= 00:07:20.697 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.697 23:06:26 -- accel/accel.sh@21 -- # val= 00:07:20.697 23:06:26 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.697 23:06:26 -- accel/accel.sh@21 -- # val= 00:07:20.697 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.697 23:06:26 -- accel/accel.sh@21 -- # val= 00:07:20.697 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.697 23:06:26 -- accel/accel.sh@21 -- # val= 00:07:20.697 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.697 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.697 23:06:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.697 23:06:26 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:20.697 23:06:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.697 00:07:20.697 real 0m2.689s 00:07:20.697 user 0m2.433s 00:07:20.697 sys 0m0.256s 00:07:20.697 23:06:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.697 23:06:26 -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 ************************************ 00:07:20.697 END TEST accel_dif_generate 00:07:20.697 ************************************ 00:07:20.697 23:06:26 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:20.697 23:06:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:20.697 23:06:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.697 23:06:26 -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 ************************************ 00:07:20.697 START TEST accel_dif_generate_copy 00:07:20.697 ************************************ 00:07:20.697 23:06:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:20.697 23:06:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.697 23:06:26 -- accel/accel.sh@17 -- # local accel_module 00:07:20.697 23:06:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:20.697 23:06:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:20.697 23:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.697 23:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.697 23:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.697 23:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.697 23:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.697 23:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.697 23:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.697 23:06:26 -- accel/accel.sh@42 -- # jq -r . 00:07:20.697 [2024-11-02 23:06:26.134140] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:20.697 [2024-11-02 23:06:26.134208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465777 ] 00:07:20.697 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.697 [2024-11-02 23:06:26.203259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.697 [2024-11-02 23:06:26.268106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.078 23:06:27 -- accel/accel.sh@18 -- # out=' 00:07:22.078 SPDK Configuration: 00:07:22.078 Core mask: 0x1 00:07:22.078 00:07:22.078 Accel Perf Configuration: 00:07:22.078 Workload Type: dif_generate_copy 00:07:22.078 Vector size: 4096 bytes 00:07:22.078 Transfer size: 4096 bytes 00:07:22.078 Vector count 1 00:07:22.078 Module: software 00:07:22.078 Queue depth: 32 00:07:22.078 Allocate depth: 32 00:07:22.078 # threads/core: 1 00:07:22.078 Run time: 1 seconds 00:07:22.078 Verify: No 00:07:22.078 00:07:22.078 Running for 1 seconds... 00:07:22.078 00:07:22.078 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.078 ------------------------------------------------------------------------------------ 00:07:22.078 0,0 130240/s 516 MiB/s 0 0 00:07:22.078 ==================================================================================== 00:07:22.078 Total 130240/s 508 MiB/s 0 0' 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:22.078 23:06:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.078 23:06:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.078 23:06:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.078 23:06:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.078 23:06:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.078 23:06:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.078 23:06:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.078 23:06:27 -- accel/accel.sh@42 -- # jq -r . 00:07:22.078 [2024-11-02 23:06:27.472161] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
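As a cross-check on the dif_generate_copy summary above: the Total row reports 130240 transfers/s at 4096 bytes per transfer, and 130240 * 4096 / 2^20 is roughly 508 MiB/s, which matches the reported bandwidth. The run itself is just the accel_perf example binary driven by accel.sh; a minimal way to repeat it by hand (a sketch, assuming the SPDK tree is already built at the workspace path shown in this log, and leaving out the -c /dev/fd/62 JSON accel config that accel.sh pipes in for optional hardware modules):
  # ~1 second dif_generate_copy run on the software path, tool defaults otherwise
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy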
00:07:22.078 [2024-11-02 23:06:27.472216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465956 ] 00:07:22.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.078 [2024-11-02 23:06:27.538414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.078 [2024-11-02 23:06:27.602298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val= 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val= 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val=0x1 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val= 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val= 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val= 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val=software 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val=32 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.078 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.078 23:06:27 -- accel/accel.sh@21 -- # val=32 00:07:22.078 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # read -r var 
val 00:07:22.079 23:06:27 -- accel/accel.sh@21 -- # val=1 00:07:22.079 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.079 23:06:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.079 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.079 23:06:27 -- accel/accel.sh@21 -- # val=No 00:07:22.079 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.079 23:06:27 -- accel/accel.sh@21 -- # val= 00:07:22.079 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.079 23:06:27 -- accel/accel.sh@21 -- # val= 00:07:22.079 23:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.079 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.458 23:06:28 -- accel/accel.sh@21 -- # val= 00:07:23.458 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.458 23:06:28 -- accel/accel.sh@21 -- # val= 00:07:23.458 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.458 23:06:28 -- accel/accel.sh@21 -- # val= 00:07:23.458 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.458 23:06:28 -- accel/accel.sh@21 -- # val= 00:07:23.458 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.458 23:06:28 -- accel/accel.sh@21 -- # val= 00:07:23.458 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.458 23:06:28 -- accel/accel.sh@21 -- # val= 00:07:23.458 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.458 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.458 23:06:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.458 23:06:28 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:23.458 23:06:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.458 00:07:23.458 real 0m2.690s 00:07:23.458 user 0m2.448s 00:07:23.458 sys 0m0.242s 00:07:23.458 23:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.458 23:06:28 -- common/autotest_common.sh@10 -- # set +x 00:07:23.458 ************************************ 00:07:23.458 END TEST accel_dif_generate_copy 00:07:23.458 ************************************ 00:07:23.458 23:06:28 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:23.458 23:06:28 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.458 23:06:28 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:23.458 23:06:28 -- common/autotest_common.sh@1083 
-- # xtrace_disable 00:07:23.458 23:06:28 -- common/autotest_common.sh@10 -- # set +x 00:07:23.458 ************************************ 00:07:23.458 START TEST accel_comp 00:07:23.458 ************************************ 00:07:23.458 23:06:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.458 23:06:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.458 23:06:28 -- accel/accel.sh@17 -- # local accel_module 00:07:23.458 23:06:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.458 23:06:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.458 23:06:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.458 23:06:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.458 23:06:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.458 23:06:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.458 23:06:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.458 23:06:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.458 23:06:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.458 23:06:28 -- accel/accel.sh@42 -- # jq -r . 00:07:23.458 [2024-11-02 23:06:28.863090] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:23.458 [2024-11-02 23:06:28.863157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466170 ] 00:07:23.458 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.458 [2024-11-02 23:06:28.933189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.458 [2024-11-02 23:06:28.998908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.852 23:06:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:24.852 00:07:24.852 SPDK Configuration: 00:07:24.852 Core mask: 0x1 00:07:24.852 00:07:24.852 Accel Perf Configuration: 00:07:24.852 Workload Type: compress 00:07:24.852 Transfer size: 4096 bytes 00:07:24.852 Vector count 1 00:07:24.852 Module: software 00:07:24.852 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:24.852 Queue depth: 32 00:07:24.852 Allocate depth: 32 00:07:24.852 # threads/core: 1 00:07:24.852 Run time: 1 seconds 00:07:24.852 Verify: No 00:07:24.852 00:07:24.852 Running for 1 seconds... 
00:07:24.852 00:07:24.852 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.852 ------------------------------------------------------------------------------------ 00:07:24.852 0,0 65696/s 273 MiB/s 0 0 00:07:24.852 ==================================================================================== 00:07:24.852 Total 65696/s 256 MiB/s 0 0' 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:24.852 23:06:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:24.852 23:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.852 23:06:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.852 23:06:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.852 23:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.852 23:06:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.852 23:06:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.852 23:06:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.852 23:06:30 -- accel/accel.sh@42 -- # jq -r . 00:07:24.852 [2024-11-02 23:06:30.224704] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:24.852 [2024-11-02 23:06:30.224788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466422 ] 00:07:24.852 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.852 [2024-11-02 23:06:30.294555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.852 [2024-11-02 23:06:30.360511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val= 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val= 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val= 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val=0x1 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val= 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val= 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val=compress 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val= 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val=software 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val=32 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val=32 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val=1 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val=No 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val= 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.852 23:06:30 -- accel/accel.sh@21 -- # val= 00:07:24.852 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.852 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.833 23:06:31 -- accel/accel.sh@21 -- # val= 00:07:25.833 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:25.833 23:06:31 -- accel/accel.sh@21 -- # val= 00:07:25.833 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:25.833 23:06:31 -- accel/accel.sh@21 -- # val= 00:07:25.833 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:25.833 
23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:25.833 23:06:31 -- accel/accel.sh@21 -- # val= 00:07:25.833 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:25.833 23:06:31 -- accel/accel.sh@21 -- # val= 00:07:25.833 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:25.833 23:06:31 -- accel/accel.sh@21 -- # val= 00:07:25.833 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:25.833 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:25.833 23:06:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.833 23:06:31 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:25.833 23:06:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.833 00:07:25.833 real 0m2.721s 00:07:25.833 user 0m2.475s 00:07:25.833 sys 0m0.246s 00:07:25.833 23:06:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.833 23:06:31 -- common/autotest_common.sh@10 -- # set +x 00:07:25.833 ************************************ 00:07:25.833 END TEST accel_comp 00:07:25.833 ************************************ 00:07:26.092 23:06:31 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:26.092 23:06:31 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:26.092 23:06:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.092 23:06:31 -- common/autotest_common.sh@10 -- # set +x 00:07:26.092 ************************************ 00:07:26.092 START TEST accel_decomp 00:07:26.092 ************************************ 00:07:26.092 23:06:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:26.092 23:06:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.092 23:06:31 -- accel/accel.sh@17 -- # local accel_module 00:07:26.092 23:06:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:26.092 23:06:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:26.092 23:06:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.092 23:06:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.092 23:06:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.092 23:06:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.092 23:06:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.092 23:06:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.092 23:06:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.092 23:06:31 -- accel/accel.sh@42 -- # jq -r . 00:07:26.092 [2024-11-02 23:06:31.624749] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
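The compress pass that just finished (END TEST accel_comp) points the same binary at an input file instead of generated buffers, and its numbers are likewise self-consistent: 65696 transfers/s at 4096 bytes per transfer is about 256 MiB/s, as reported in its Total row. A hand-run sketch under the same assumptions as the earlier example (built tree in place, accel.sh's /dev/fd/62 config omitted):
  # compress the bundled test input for ~1 second on the software path
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib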
00:07:26.092 [2024-11-02 23:06:31.624824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466713 ] 00:07:26.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.092 [2024-11-02 23:06:31.693618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.092 [2024-11-02 23:06:31.758453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.471 23:06:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:27.471 00:07:27.471 SPDK Configuration: 00:07:27.471 Core mask: 0x1 00:07:27.471 00:07:27.471 Accel Perf Configuration: 00:07:27.471 Workload Type: decompress 00:07:27.471 Transfer size: 4096 bytes 00:07:27.471 Vector count 1 00:07:27.471 Module: software 00:07:27.471 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.471 Queue depth: 32 00:07:27.471 Allocate depth: 32 00:07:27.471 # threads/core: 1 00:07:27.471 Run time: 1 seconds 00:07:27.471 Verify: Yes 00:07:27.471 00:07:27.471 Running for 1 seconds... 00:07:27.471 00:07:27.471 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.471 ------------------------------------------------------------------------------------ 00:07:27.471 0,0 85024/s 156 MiB/s 0 0 00:07:27.471 ==================================================================================== 00:07:27.471 Total 85024/s 332 MiB/s 0 0' 00:07:27.471 23:06:32 -- accel/accel.sh@20 -- # IFS=: 00:07:27.471 23:06:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.471 23:06:32 -- accel/accel.sh@20 -- # read -r var val 00:07:27.471 23:06:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.471 23:06:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.471 23:06:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.471 23:06:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.471 23:06:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.471 23:06:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.471 23:06:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.471 23:06:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.471 23:06:32 -- accel/accel.sh@42 -- # jq -r . 00:07:27.471 [2024-11-02 23:06:32.964512] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
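For the decompress run above, the -y flag enables data verification (hence Verify: Yes in the configuration block, where the compress run reported Verify: No), and the Total row again checks out: 85024 transfers/s at 4096 bytes is roughly 332 MiB/s. A minimal manual sketch with the same caveats as before:
  # decompress the test input for ~1 second and verify the output
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y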
00:07:27.471 [2024-11-02 23:06:32.964563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466979 ] 00:07:27.471 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.471 [2024-11-02 23:06:33.030489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.471 [2024-11-02 23:06:33.093977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.471 23:06:33 -- accel/accel.sh@21 -- # val= 00:07:27.471 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.471 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.471 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.471 23:06:33 -- accel/accel.sh@21 -- # val= 00:07:27.471 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.471 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.471 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.471 23:06:33 -- accel/accel.sh@21 -- # val= 00:07:27.471 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.471 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.471 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.471 23:06:33 -- accel/accel.sh@21 -- # val=0x1 00:07:27.471 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.471 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.471 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.471 23:06:33 -- accel/accel.sh@21 -- # val= 00:07:27.471 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val= 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val=decompress 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val= 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val=software 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val=32 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- 
accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val=32 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val=1 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val=Yes 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val= 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.472 23:06:33 -- accel/accel.sh@21 -- # val= 00:07:27.472 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.472 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.851 23:06:34 -- accel/accel.sh@21 -- # val= 00:07:28.851 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:28.851 23:06:34 -- accel/accel.sh@21 -- # val= 00:07:28.851 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:28.851 23:06:34 -- accel/accel.sh@21 -- # val= 00:07:28.851 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:28.851 23:06:34 -- accel/accel.sh@21 -- # val= 00:07:28.851 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:28.851 23:06:34 -- accel/accel.sh@21 -- # val= 00:07:28.851 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:28.851 23:06:34 -- accel/accel.sh@21 -- # val= 00:07:28.851 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:28.851 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:28.851 23:06:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.851 23:06:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.851 23:06:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.851 00:07:28.851 real 0m2.693s 00:07:28.851 user 0m2.456s 00:07:28.851 sys 0m0.237s 00:07:28.851 23:06:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.851 23:06:34 -- common/autotest_common.sh@10 -- # set +x 00:07:28.851 ************************************ 00:07:28.851 END TEST accel_decomp 00:07:28.851 ************************************ 00:07:28.851 23:06:34 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.851 23:06:34 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:28.851 23:06:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.851 23:06:34 -- common/autotest_common.sh@10 -- # set +x 00:07:28.851 ************************************ 00:07:28.851 START TEST accel_decmop_full 00:07:28.851 ************************************ 00:07:28.851 23:06:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.851 23:06:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.851 23:06:34 -- accel/accel.sh@17 -- # local accel_module 00:07:28.851 23:06:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.851 23:06:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.851 23:06:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.851 23:06:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.851 23:06:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.851 23:06:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.851 23:06:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.851 23:06:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.851 23:06:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.851 23:06:34 -- accel/accel.sh@42 -- # jq -r . 00:07:28.851 [2024-11-02 23:06:34.356935] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:28.851 [2024-11-02 23:06:34.357019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467260 ] 00:07:28.851 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.851 [2024-11-02 23:06:34.426347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.851 [2024-11-02 23:06:34.488374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.230 23:06:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.230 00:07:30.230 SPDK Configuration: 00:07:30.230 Core mask: 0x1 00:07:30.230 00:07:30.230 Accel Perf Configuration: 00:07:30.230 Workload Type: decompress 00:07:30.230 Transfer size: 111250 bytes 00:07:30.230 Vector count 1 00:07:30.230 Module: software 00:07:30.230 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:30.230 Queue depth: 32 00:07:30.230 Allocate depth: 32 00:07:30.230 # threads/core: 1 00:07:30.230 Run time: 1 seconds 00:07:30.230 Verify: Yes 00:07:30.230 00:07:30.230 Running for 1 seconds... 
00:07:30.230 00:07:30.230 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.230 ------------------------------------------------------------------------------------ 00:07:30.230 0,0 5856/s 241 MiB/s 0 0 00:07:30.230 ==================================================================================== 00:07:30.230 Total 5856/s 621 MiB/s 0 0' 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:30.230 23:06:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.230 23:06:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.230 23:06:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.230 23:06:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.230 23:06:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.230 23:06:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.230 23:06:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.230 23:06:35 -- accel/accel.sh@42 -- # jq -r . 00:07:30.230 [2024-11-02 23:06:35.705403] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:30.230 [2024-11-02 23:06:35.705456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467538 ] 00:07:30.230 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.230 [2024-11-02 23:06:35.772759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.230 [2024-11-02 23:06:35.836237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val= 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val= 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val= 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val=0x1 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val= 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val= 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val=decompress 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 
00:07:30.230 23:06:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val= 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val=software 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val=32 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val=32 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val=1 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val=Yes 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val= 00:07:30.230 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.230 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.230 23:06:35 -- accel/accel.sh@21 -- # val= 00:07:30.231 23:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.231 23:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.231 23:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.611 23:06:37 -- accel/accel.sh@21 -- # val= 00:07:31.611 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.611 23:06:37 -- accel/accel.sh@21 -- # val= 00:07:31.611 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.611 23:06:37 -- accel/accel.sh@21 -- # val= 00:07:31.611 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.611 23:06:37 -- 
accel/accel.sh@20 -- # IFS=: 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.611 23:06:37 -- accel/accel.sh@21 -- # val= 00:07:31.611 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.611 23:06:37 -- accel/accel.sh@21 -- # val= 00:07:31.611 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.611 23:06:37 -- accel/accel.sh@21 -- # val= 00:07:31.611 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.611 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.611 23:06:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.611 23:06:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.611 23:06:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.611 00:07:31.611 real 0m2.708s 00:07:31.611 user 0m2.452s 00:07:31.611 sys 0m0.254s 00:07:31.611 23:06:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.611 23:06:37 -- common/autotest_common.sh@10 -- # set +x 00:07:31.611 ************************************ 00:07:31.611 END TEST accel_decmop_full 00:07:31.611 ************************************ 00:07:31.611 23:06:37 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.611 23:06:37 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:31.611 23:06:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.611 23:06:37 -- common/autotest_common.sh@10 -- # set +x 00:07:31.611 ************************************ 00:07:31.611 START TEST accel_decomp_mcore 00:07:31.611 ************************************ 00:07:31.611 23:06:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.611 23:06:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.611 23:06:37 -- accel/accel.sh@17 -- # local accel_module 00:07:31.611 23:06:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.611 23:06:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.611 23:06:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.611 23:06:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.611 23:06:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.611 23:06:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.611 23:06:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.611 23:06:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.611 23:06:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.611 23:06:37 -- accel/accel.sh@42 -- # jq -r . 00:07:31.611 [2024-11-02 23:06:37.105453] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
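The full-buffer variant that just completed (END TEST accel_decmop_full) adds -o 0, which appears to let the transfer size follow the input data rather than the 4096-byte default; its configuration block reported 111250-byte transfers, and 5856 transfers/s at that size is about 621 MiB/s, matching its Total row. Manual sketch, same assumptions as the earlier examples:
  # full-buffer decompress: transfer size taken from the input file rather than 4 KiB
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0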
00:07:31.611 [2024-11-02 23:06:37.105538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467820 ] 00:07:31.611 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.611 [2024-11-02 23:06:37.176653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.611 [2024-11-02 23:06:37.241192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.611 [2024-11-02 23:06:37.241289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.611 [2024-11-02 23:06:37.241351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.611 [2024-11-02 23:06:37.241353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.990 23:06:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.990 00:07:32.990 SPDK Configuration: 00:07:32.990 Core mask: 0xf 00:07:32.990 00:07:32.990 Accel Perf Configuration: 00:07:32.990 Workload Type: decompress 00:07:32.990 Transfer size: 4096 bytes 00:07:32.990 Vector count 1 00:07:32.990 Module: software 00:07:32.990 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:32.990 Queue depth: 32 00:07:32.990 Allocate depth: 32 00:07:32.990 # threads/core: 1 00:07:32.990 Run time: 1 seconds 00:07:32.990 Verify: Yes 00:07:32.990 00:07:32.990 Running for 1 seconds... 00:07:32.990 00:07:32.990 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.990 ------------------------------------------------------------------------------------ 00:07:32.990 0,0 69632/s 128 MiB/s 0 0 00:07:32.990 3,0 73472/s 135 MiB/s 0 0 00:07:32.990 2,0 73344/s 135 MiB/s 0 0 00:07:32.991 1,0 73472/s 135 MiB/s 0 0 00:07:32.991 ==================================================================================== 00:07:32.991 Total 289920/s 1132 MiB/s 0 0' 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.991 23:06:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.991 23:06:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.991 23:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.991 23:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.991 23:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.991 23:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.991 23:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.991 23:06:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.991 23:06:38 -- accel/accel.sh@42 -- # jq -r . 00:07:32.991 [2024-11-02 23:06:38.473172] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
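The multi-core pass above runs the same decompress workload with -m 0xf, so the EAL comes up with core mask 0xf and four reactors start instead of one. The per-core rows sum to the Total: 69632 + 73472 + 73344 + 73472 = 289920 transfers/s, or about 1132 MiB/s at 4096 bytes per transfer, as reported. Manual sketch, same assumptions as before:
  # 4 KiB decompress spread across cores 0-3
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf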
00:07:32.991 [2024-11-02 23:06:38.473252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468038 ] 00:07:32.991 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.991 [2024-11-02 23:06:38.542844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.991 [2024-11-02 23:06:38.609781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.991 [2024-11-02 23:06:38.609878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.991 [2024-11-02 23:06:38.609985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.991 [2024-11-02 23:06:38.609987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val= 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val= 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val= 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val=0xf 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val= 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val= 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val=decompress 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val= 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val=software 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 
00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val=32 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val=32 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val=1 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val=Yes 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val= 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:32.991 23:06:38 -- accel/accel.sh@21 -- # val= 00:07:32.991 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:32.991 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- 
accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@21 -- # val= 00:07:34.370 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.370 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.370 23:06:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.370 23:06:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.370 23:06:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.370 00:07:34.370 real 0m2.743s 00:07:34.370 user 0m9.142s 00:07:34.370 sys 0m0.271s 00:07:34.370 23:06:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.370 23:06:39 -- common/autotest_common.sh@10 -- # set +x 00:07:34.370 ************************************ 00:07:34.370 END TEST accel_decomp_mcore 00:07:34.370 ************************************ 00:07:34.370 23:06:39 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.370 23:06:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:34.370 23:06:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.370 23:06:39 -- common/autotest_common.sh@10 -- # set +x 00:07:34.370 ************************************ 00:07:34.370 START TEST accel_decomp_full_mcore 00:07:34.370 ************************************ 00:07:34.370 23:06:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.370 23:06:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.370 23:06:39 -- accel/accel.sh@17 -- # local accel_module 00:07:34.370 23:06:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.370 23:06:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.370 23:06:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.370 23:06:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.370 23:06:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.370 23:06:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.370 23:06:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.370 23:06:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.370 23:06:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.370 23:06:39 -- accel/accel.sh@42 -- # jq -r . 00:07:34.370 [2024-11-02 23:06:39.897577] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:34.370 [2024-11-02 23:06:39.897660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468267 ] 00:07:34.370 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.370 [2024-11-02 23:06:39.967242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.370 [2024-11-02 23:06:40.041262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.370 [2024-11-02 23:06:40.041278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.370 [2024-11-02 23:06:40.041299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.370 [2024-11-02 23:06:40.041301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.749 23:06:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.749 00:07:35.749 SPDK Configuration: 00:07:35.749 Core mask: 0xf 00:07:35.749 00:07:35.749 Accel Perf Configuration: 00:07:35.749 Workload Type: decompress 00:07:35.749 Transfer size: 111250 bytes 00:07:35.749 Vector count 1 00:07:35.749 Module: software 00:07:35.749 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:35.749 Queue depth: 32 00:07:35.749 Allocate depth: 32 00:07:35.749 # threads/core: 1 00:07:35.749 Run time: 1 seconds 00:07:35.749 Verify: Yes 00:07:35.749 00:07:35.749 Running for 1 seconds... 00:07:35.749 00:07:35.749 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.749 ------------------------------------------------------------------------------------ 00:07:35.749 0,0 5376/s 222 MiB/s 0 0 00:07:35.749 3,0 5696/s 235 MiB/s 0 0 00:07:35.749 2,0 5696/s 235 MiB/s 0 0 00:07:35.749 1,0 5696/s 235 MiB/s 0 0 00:07:35.749 ==================================================================================== 00:07:35.749 Total 22464/s 2383 MiB/s 0 0' 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.749 23:06:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.749 23:06:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.749 23:06:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.749 23:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.749 23:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.749 23:06:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.749 23:06:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.749 23:06:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.749 23:06:41 -- accel/accel.sh@42 -- # jq -r . 00:07:35.749 [2024-11-02 23:06:41.277035] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:35.749 [2024-11-02 23:06:41.277102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468451 ] 00:07:35.749 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.749 [2024-11-02 23:06:41.347118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.749 [2024-11-02 23:06:41.421059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.749 [2024-11-02 23:06:41.421074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.749 [2024-11-02 23:06:41.421144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.749 [2024-11-02 23:06:41.421142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val= 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val= 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val= 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val=0xf 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val= 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val= 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val=decompress 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val= 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val=software 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val=32 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val=32 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val=1 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val=Yes 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val= 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.749 23:06:41 -- accel/accel.sh@21 -- # val= 00:07:35.749 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.749 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 
-- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@21 -- # val= 00:07:37.128 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.128 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.128 23:06:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.128 23:06:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.128 23:06:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.128 00:07:37.128 real 0m2.772s 00:07:37.128 user 0m9.202s 00:07:37.128 sys 0m0.289s 00:07:37.128 23:06:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.128 23:06:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.128 ************************************ 00:07:37.128 END TEST accel_decomp_full_mcore 00:07:37.128 ************************************ 00:07:37.128 23:06:42 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.128 23:06:42 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:37.128 23:06:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.128 23:06:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.128 ************************************ 00:07:37.128 START TEST accel_decomp_mthread 00:07:37.128 ************************************ 00:07:37.128 23:06:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.128 23:06:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.128 23:06:42 -- accel/accel.sh@17 -- # local accel_module 00:07:37.128 23:06:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.128 23:06:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.128 23:06:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.128 23:06:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.128 23:06:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.128 23:06:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.128 23:06:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.128 23:06:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.128 23:06:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.128 23:06:42 -- accel/accel.sh@42 -- # jq -r . 00:07:37.128 [2024-11-02 23:06:42.720532] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:37.128 [2024-11-02 23:06:42.720598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468709 ] 00:07:37.128 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.128 [2024-11-02 23:06:42.791305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.128 [2024-11-02 23:06:42.859540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.506 23:06:44 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:38.506 00:07:38.506 SPDK Configuration: 00:07:38.506 Core mask: 0x1 00:07:38.506 00:07:38.506 Accel Perf Configuration: 00:07:38.506 Workload Type: decompress 00:07:38.506 Transfer size: 4096 bytes 00:07:38.506 Vector count 1 00:07:38.506 Module: software 00:07:38.506 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:38.506 Queue depth: 32 00:07:38.506 Allocate depth: 32 00:07:38.506 # threads/core: 2 00:07:38.506 Run time: 1 seconds 00:07:38.506 Verify: Yes 00:07:38.506 00:07:38.506 Running for 1 seconds... 00:07:38.506 00:07:38.506 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.506 ------------------------------------------------------------------------------------ 00:07:38.506 0,1 44640/s 82 MiB/s 0 0 00:07:38.506 0,0 44480/s 81 MiB/s 0 0 00:07:38.506 ==================================================================================== 00:07:38.506 Total 89120/s 348 MiB/s 0 0' 00:07:38.506 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.506 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.506 23:06:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.506 23:06:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.506 23:06:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.506 23:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.506 23:06:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.506 23:06:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.506 23:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.506 23:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.506 23:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.506 23:06:44 -- accel/accel.sh@42 -- # jq -r . 00:07:38.506 [2024-11-02 23:06:44.084709] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:38.506 [2024-11-02 23:06:44.084776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468968 ] 00:07:38.506 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.506 [2024-11-02 23:06:44.152599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.506 [2024-11-02 23:06:44.216009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.506 23:06:44 -- accel/accel.sh@21 -- # val= 00:07:38.506 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.506 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.507 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.507 23:06:44 -- accel/accel.sh@21 -- # val= 00:07:38.507 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.507 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.507 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.507 23:06:44 -- accel/accel.sh@21 -- # val= 00:07:38.507 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.507 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.507 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.507 23:06:44 -- accel/accel.sh@21 -- # val=0x1 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val= 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val= 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val=decompress 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val= 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val=software 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val=32 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- 
accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val=32 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val=2 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val=Yes 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val= 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.766 23:06:44 -- accel/accel.sh@21 -- # val= 00:07:38.766 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.766 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 23:06:45 -- accel/accel.sh@21 -- # val= 00:07:39.704 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 23:06:45 -- accel/accel.sh@21 -- # val= 00:07:39.704 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 23:06:45 -- accel/accel.sh@21 -- # val= 00:07:39.704 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 23:06:45 -- accel/accel.sh@21 -- # val= 00:07:39.704 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 23:06:45 -- accel/accel.sh@21 -- # val= 00:07:39.704 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 23:06:45 -- accel/accel.sh@21 -- # val= 00:07:39.704 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 23:06:45 -- accel/accel.sh@21 -- # val= 00:07:39.704 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.704 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.704 23:06:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.704 23:06:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.704 23:06:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.704 00:07:39.704 real 0m2.730s 00:07:39.704 user 0m2.479s 00:07:39.704 sys 0m0.261s 00:07:39.704 23:06:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.704 23:06:45 -- common/autotest_common.sh@10 -- # set +x 
00:07:39.704 ************************************ 00:07:39.704 END TEST accel_decomp_mthread 00:07:39.704 ************************************ 00:07:39.964 23:06:45 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.964 23:06:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:39.964 23:06:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.964 23:06:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.964 ************************************ 00:07:39.964 START TEST accel_deomp_full_mthread 00:07:39.964 ************************************ 00:07:39.964 23:06:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.964 23:06:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.964 23:06:45 -- accel/accel.sh@17 -- # local accel_module 00:07:39.964 23:06:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.964 23:06:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.964 23:06:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.964 23:06:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.964 23:06:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.964 23:06:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.964 23:06:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.964 23:06:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.964 23:06:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.964 23:06:45 -- accel/accel.sh@42 -- # jq -r . 00:07:39.964 [2024-11-02 23:06:45.484962] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:39.964 [2024-11-02 23:06:45.485029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469253 ] 00:07:39.964 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.964 [2024-11-02 23:06:45.546729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.964 [2024-11-02 23:06:45.612117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.343 23:06:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:41.343 00:07:41.343 SPDK Configuration: 00:07:41.343 Core mask: 0x1 00:07:41.343 00:07:41.343 Accel Perf Configuration: 00:07:41.343 Workload Type: decompress 00:07:41.343 Transfer size: 111250 bytes 00:07:41.343 Vector count 1 00:07:41.343 Module: software 00:07:41.343 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.343 Queue depth: 32 00:07:41.344 Allocate depth: 32 00:07:41.344 # threads/core: 2 00:07:41.344 Run time: 1 seconds 00:07:41.344 Verify: Yes 00:07:41.344 00:07:41.344 Running for 1 seconds... 
00:07:41.344 00:07:41.344 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.344 ------------------------------------------------------------------------------------ 00:07:41.344 0,1 2912/s 120 MiB/s 0 0 00:07:41.344 0,0 2880/s 118 MiB/s 0 0 00:07:41.344 ==================================================================================== 00:07:41.344 Total 5792/s 614 MiB/s 0 0' 00:07:41.344 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.344 23:06:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.344 23:06:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.344 23:06:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.344 23:06:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.344 23:06:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.344 23:06:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.344 23:06:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.344 23:06:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.344 23:06:46 -- accel/accel.sh@42 -- # jq -r . 00:07:41.344 [2024-11-02 23:06:46.853418] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:41.344 [2024-11-02 23:06:46.853494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469521 ] 00:07:41.344 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.344 [2024-11-02 23:06:46.921835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.344 [2024-11-02 23:06:46.985793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val= 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val= 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val= 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val=0x1 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val= 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val= 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val=decompress 00:07:41.344 23:06:47 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val= 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val=software 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val=32 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val=32 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val=2 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val=Yes 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val= 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.344 23:06:47 -- accel/accel.sh@21 -- # val= 00:07:41.344 23:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.344 23:06:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.723 23:06:48 -- accel/accel.sh@21 -- # val= 00:07:42.723 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:42.723 23:06:48 -- accel/accel.sh@21 -- # val= 00:07:42.723 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:42.723 23:06:48 -- accel/accel.sh@21 -- # val= 00:07:42.723 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 
00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:42.723 23:06:48 -- accel/accel.sh@21 -- # val= 00:07:42.723 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:42.723 23:06:48 -- accel/accel.sh@21 -- # val= 00:07:42.723 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:42.723 23:06:48 -- accel/accel.sh@21 -- # val= 00:07:42.723 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:42.723 23:06:48 -- accel/accel.sh@21 -- # val= 00:07:42.723 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:42.723 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:42.723 23:06:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.723 23:06:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:42.723 23:06:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.723 00:07:42.723 real 0m2.739s 00:07:42.723 user 0m2.499s 00:07:42.723 sys 0m0.248s 00:07:42.723 23:06:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.723 23:06:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.723 ************************************ 00:07:42.723 END TEST accel_deomp_full_mthread 00:07:42.723 ************************************ 00:07:42.723 23:06:48 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:42.723 23:06:48 -- accel/accel.sh@129 -- # build_accel_config 00:07:42.723 23:06:48 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:42.723 23:06:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.723 23:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.723 23:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.723 23:06:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.723 23:06:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:42.723 23:06:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.723 23:06:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.723 23:06:48 -- accel/accel.sh@42 -- # jq -r . 00:07:42.723 23:06:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.724 23:06:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.724 ************************************ 00:07:42.724 START TEST accel_dif_functional_tests 00:07:42.724 ************************************ 00:07:42.724 23:06:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:42.724 [2024-11-02 23:06:48.280114] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:42.724 [2024-11-02 23:06:48.280166] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469813 ] 00:07:42.724 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.724 [2024-11-02 23:06:48.346162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.724 [2024-11-02 23:06:48.413642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.724 [2024-11-02 23:06:48.413737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.724 [2024-11-02 23:06:48.413740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.983 00:07:42.983 00:07:42.983 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.983 http://cunit.sourceforge.net/ 00:07:42.983 00:07:42.983 00:07:42.983 Suite: accel_dif 00:07:42.983 Test: verify: DIF generated, GUARD check ...passed 00:07:42.983 Test: verify: DIF generated, APPTAG check ...passed 00:07:42.983 Test: verify: DIF generated, REFTAG check ...passed 00:07:42.983 Test: verify: DIF not generated, GUARD check ...[2024-11-02 23:06:48.481641] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.983 [2024-11-02 23:06:48.481691] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.983 passed 00:07:42.983 Test: verify: DIF not generated, APPTAG check ...[2024-11-02 23:06:48.481721] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.983 [2024-11-02 23:06:48.481738] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.983 passed 00:07:42.983 Test: verify: DIF not generated, REFTAG check ...[2024-11-02 23:06:48.481758] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.983 [2024-11-02 23:06:48.481775] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.983 passed 00:07:42.983 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:42.983 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-02 23:06:48.481820] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:42.983 passed 00:07:42.983 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:42.983 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:42.983 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:42.983 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-02 23:06:48.481927] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:42.983 passed 00:07:42.983 Test: generate copy: DIF generated, GUARD check ...passed 00:07:42.983 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:42.983 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:42.983 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:42.983 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:42.983 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:42.983 Test: generate copy: iovecs-len validate ...[2024-11-02 23:06:48.482103] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:42.983 passed 00:07:42.983 Test: generate copy: buffer alignment validate ...passed 00:07:42.983 00:07:42.983 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.983 suites 1 1 n/a 0 0 00:07:42.983 tests 20 20 20 0 0 00:07:42.983 asserts 204 204 204 0 n/a 00:07:42.983 00:07:42.983 Elapsed time = 0.002 seconds 00:07:42.983 00:07:42.983 real 0m0.412s 00:07:42.983 user 0m0.603s 00:07:42.983 sys 0m0.149s 00:07:42.983 23:06:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.983 23:06:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.983 ************************************ 00:07:42.983 END TEST accel_dif_functional_tests 00:07:42.983 ************************************ 00:07:42.983 00:07:42.983 real 0m57.912s 00:07:42.983 user 1m5.665s 00:07:42.983 sys 0m6.836s 00:07:42.983 23:06:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.983 23:06:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.983 ************************************ 00:07:42.983 END TEST accel 00:07:42.983 ************************************ 00:07:43.242 23:06:48 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:43.242 23:06:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.242 23:06:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.242 23:06:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.242 ************************************ 00:07:43.242 START TEST accel_rpc 00:07:43.242 ************************************ 00:07:43.242 23:06:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:43.242 * Looking for test storage... 00:07:43.242 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:43.242 23:06:48 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:43.242 23:06:48 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=469887 00:07:43.242 23:06:48 -- accel/accel_rpc.sh@15 -- # waitforlisten 469887 00:07:43.242 23:06:48 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:43.242 23:06:48 -- common/autotest_common.sh@819 -- # '[' -z 469887 ']' 00:07:43.242 23:06:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.242 23:06:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:43.242 23:06:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.242 23:06:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:43.242 23:06:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.242 [2024-11-02 23:06:48.912470] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:43.242 [2024-11-02 23:06:48.912525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469887 ] 00:07:43.242 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.242 [2024-11-02 23:06:48.982683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.500 [2024-11-02 23:06:49.049724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.500 [2024-11-02 23:06:49.049871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.067 23:06:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:44.067 23:06:49 -- common/autotest_common.sh@852 -- # return 0 00:07:44.067 23:06:49 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:44.067 23:06:49 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:44.067 23:06:49 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:44.067 23:06:49 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:44.067 23:06:49 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:44.067 23:06:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.067 23:06:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.067 23:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:44.067 ************************************ 00:07:44.067 START TEST accel_assign_opcode 00:07:44.067 ************************************ 00:07:44.067 23:06:49 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:44.067 23:06:49 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:44.067 23:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.067 23:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:44.067 [2024-11-02 23:06:49.727878] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:44.067 23:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.067 23:06:49 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:44.067 23:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.067 23:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:44.067 [2024-11-02 23:06:49.735891] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:44.067 23:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.067 23:06:49 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:44.067 23:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.067 23:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:44.326 23:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.326 23:06:49 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:44.326 23:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.326 23:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:44.326 23:06:49 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:44.326 23:06:49 -- accel/accel_rpc.sh@42 -- # grep software 00:07:44.326 23:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.326 software 00:07:44.326 00:07:44.326 real 0m0.223s 00:07:44.326 user 0m0.037s 00:07:44.326 sys 0m0.005s 00:07:44.326 23:06:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.326 23:06:49 -- common/autotest_common.sh@10 -- # set +x 
00:07:44.326 ************************************ 00:07:44.326 END TEST accel_assign_opcode 00:07:44.326 ************************************ 00:07:44.326 23:06:49 -- accel/accel_rpc.sh@55 -- # killprocess 469887 00:07:44.326 23:06:49 -- common/autotest_common.sh@926 -- # '[' -z 469887 ']' 00:07:44.326 23:06:49 -- common/autotest_common.sh@930 -- # kill -0 469887 00:07:44.326 23:06:49 -- common/autotest_common.sh@931 -- # uname 00:07:44.326 23:06:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:44.326 23:06:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 469887 00:07:44.326 23:06:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:44.326 23:06:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:44.326 23:06:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 469887' 00:07:44.326 killing process with pid 469887 00:07:44.326 23:06:50 -- common/autotest_common.sh@945 -- # kill 469887 00:07:44.326 23:06:50 -- common/autotest_common.sh@950 -- # wait 469887 00:07:44.895 00:07:44.895 real 0m1.623s 00:07:44.895 user 0m1.656s 00:07:44.895 sys 0m0.462s 00:07:44.895 23:06:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.895 23:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.895 ************************************ 00:07:44.895 END TEST accel_rpc 00:07:44.895 ************************************ 00:07:44.895 23:06:50 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.895 23:06:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.895 23:06:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.895 23:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.895 ************************************ 00:07:44.895 START TEST app_cmdline 00:07:44.895 ************************************ 00:07:44.895 23:06:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.895 * Looking for test storage... 00:07:44.895 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:44.895 23:06:50 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:44.895 23:06:50 -- app/cmdline.sh@17 -- # spdk_tgt_pid=470283 00:07:44.895 23:06:50 -- app/cmdline.sh@18 -- # waitforlisten 470283 00:07:44.895 23:06:50 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:44.895 23:06:50 -- common/autotest_common.sh@819 -- # '[' -z 470283 ']' 00:07:44.895 23:06:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.895 23:06:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:44.895 23:06:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.895 23:06:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:44.895 23:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.895 [2024-11-02 23:06:50.585601] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:44.895 [2024-11-02 23:06:50.585660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470283 ] 00:07:44.895 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.154 [2024-11-02 23:06:50.654297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.154 [2024-11-02 23:06:50.727621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.154 [2024-11-02 23:06:50.727741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.725 23:06:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:45.725 23:06:51 -- common/autotest_common.sh@852 -- # return 0 00:07:45.725 23:06:51 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:45.986 { 00:07:45.986 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:07:45.986 "fields": { 00:07:45.986 "major": 24, 00:07:45.986 "minor": 1, 00:07:45.986 "patch": 1, 00:07:45.986 "suffix": "-pre", 00:07:45.986 "commit": "726a04d70" 00:07:45.986 } 00:07:45.986 } 00:07:45.986 23:06:51 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:45.986 23:06:51 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:45.986 23:06:51 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:45.986 23:06:51 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:45.986 23:06:51 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:45.986 23:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:45.986 23:06:51 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:45.986 23:06:51 -- app/cmdline.sh@26 -- # sort 00:07:45.986 23:06:51 -- common/autotest_common.sh@10 -- # set +x 00:07:45.986 23:06:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:45.986 23:06:51 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:45.986 23:06:51 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:45.986 23:06:51 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.986 23:06:51 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.986 23:06:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.986 23:06:51 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:45.986 23:06:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.986 23:06:51 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:45.986 23:06:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.986 23:06:51 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:45.986 23:06:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.986 23:06:51 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:45.986 23:06:51 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.986 23:06:51 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.245 request: 00:07:46.245 { 00:07:46.245 "method": "env_dpdk_get_mem_stats", 00:07:46.245 "req_id": 1 00:07:46.245 } 00:07:46.245 Got JSON-RPC error response 00:07:46.245 response: 00:07:46.245 { 00:07:46.245 "code": -32601, 00:07:46.245 "message": "Method not found" 00:07:46.245 } 00:07:46.245 23:06:51 -- common/autotest_common.sh@643 -- # es=1 00:07:46.245 23:06:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:46.245 23:06:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:46.245 23:06:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:46.245 23:06:51 -- app/cmdline.sh@1 -- # killprocess 470283 00:07:46.245 23:06:51 -- common/autotest_common.sh@926 -- # '[' -z 470283 ']' 00:07:46.245 23:06:51 -- common/autotest_common.sh@930 -- # kill -0 470283 00:07:46.245 23:06:51 -- common/autotest_common.sh@931 -- # uname 00:07:46.245 23:06:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:46.245 23:06:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 470283 00:07:46.245 23:06:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:46.245 23:06:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:46.245 23:06:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 470283' 00:07:46.245 killing process with pid 470283 00:07:46.245 23:06:51 -- common/autotest_common.sh@945 -- # kill 470283 00:07:46.245 23:06:51 -- common/autotest_common.sh@950 -- # wait 470283 00:07:46.504 00:07:46.504 real 0m1.749s 00:07:46.504 user 0m2.075s 00:07:46.504 sys 0m0.474s 00:07:46.504 23:06:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.504 23:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:46.504 ************************************ 00:07:46.504 END TEST app_cmdline 00:07:46.505 ************************************ 00:07:46.505 23:06:52 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:46.505 23:06:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.505 23:06:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.505 23:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:46.505 ************************************ 00:07:46.505 START TEST version 00:07:46.505 ************************************ 00:07:46.505 23:06:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:46.764 * Looking for test storage... 
00:07:46.764 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:46.764 23:06:52 -- app/version.sh@17 -- # get_header_version major 00:07:46.764 23:06:52 -- app/version.sh@14 -- # cut -f2 00:07:46.764 23:06:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:46.764 23:06:52 -- app/version.sh@14 -- # tr -d '"' 00:07:46.764 23:06:52 -- app/version.sh@17 -- # major=24 00:07:46.764 23:06:52 -- app/version.sh@18 -- # get_header_version minor 00:07:46.764 23:06:52 -- app/version.sh@14 -- # tr -d '"' 00:07:46.764 23:06:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:46.764 23:06:52 -- app/version.sh@14 -- # cut -f2 00:07:46.764 23:06:52 -- app/version.sh@18 -- # minor=1 00:07:46.764 23:06:52 -- app/version.sh@19 -- # get_header_version patch 00:07:46.764 23:06:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:46.764 23:06:52 -- app/version.sh@14 -- # cut -f2 00:07:46.764 23:06:52 -- app/version.sh@14 -- # tr -d '"' 00:07:46.764 23:06:52 -- app/version.sh@19 -- # patch=1 00:07:46.764 23:06:52 -- app/version.sh@20 -- # get_header_version suffix 00:07:46.764 23:06:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:46.764 23:06:52 -- app/version.sh@14 -- # cut -f2 00:07:46.764 23:06:52 -- app/version.sh@14 -- # tr -d '"' 00:07:46.764 23:06:52 -- app/version.sh@20 -- # suffix=-pre 00:07:46.764 23:06:52 -- app/version.sh@22 -- # version=24.1 00:07:46.764 23:06:52 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:46.764 23:06:52 -- app/version.sh@25 -- # version=24.1.1 00:07:46.764 23:06:52 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:46.764 23:06:52 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:46.764 23:06:52 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:46.764 23:06:52 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:46.764 23:06:52 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:46.764 00:07:46.764 real 0m0.177s 00:07:46.764 user 0m0.087s 00:07:46.764 sys 0m0.127s 00:07:46.764 23:06:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.764 23:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:46.764 ************************************ 00:07:46.764 END TEST version 00:07:46.764 ************************************ 00:07:46.764 23:06:52 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:46.764 23:06:52 -- spdk/autotest.sh@204 -- # uname -s 00:07:46.764 23:06:52 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:46.764 23:06:52 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:46.764 23:06:52 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:46.764 23:06:52 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:46.764 23:06:52 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:46.764 23:06:52 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:46.764 23:06:52 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:07:46.764 23:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:46.764 23:06:52 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:46.764 23:06:52 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:46.764 23:06:52 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:46.764 23:06:52 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:46.764 23:06:52 -- spdk/autotest.sh@291 -- # '[' rdma = rdma ']' 00:07:46.764 23:06:52 -- spdk/autotest.sh@292 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:46.764 23:06:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:46.764 23:06:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.764 23:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:46.764 ************************************ 00:07:46.764 START TEST nvmf_rdma 00:07:46.764 ************************************ 00:07:46.764 23:06:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:47.024 * Looking for test storage... 00:07:47.024 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:47.024 23:06:52 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:47.024 23:06:52 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:47.024 23:06:52 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.024 23:06:52 -- nvmf/common.sh@7 -- # uname -s 00:07:47.024 23:06:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.024 23:06:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.024 23:06:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.024 23:06:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.024 23:06:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.024 23:06:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.024 23:06:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.024 23:06:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.024 23:06:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.024 23:06:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.024 23:06:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:47.024 23:06:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:47.024 23:06:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.024 23:06:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.024 23:06:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.024 23:06:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:47.024 23:06:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.024 23:06:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.024 23:06:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.024 23:06:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.024 23:06:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.024 23:06:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.024 23:06:52 -- paths/export.sh@5 -- # export PATH 00:07:47.024 23:06:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.024 23:06:52 -- nvmf/common.sh@46 -- # : 0 00:07:47.024 23:06:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:47.024 23:06:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:47.024 23:06:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:47.024 23:06:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.024 23:06:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.024 23:06:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:47.024 23:06:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:47.024 23:06:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:47.024 23:06:52 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:47.024 23:06:52 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:47.024 23:06:52 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:47.024 23:06:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:47.024 23:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:47.024 23:06:52 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:47.024 23:06:52 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:47.024 23:06:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:47.024 23:06:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.024 23:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:47.024 ************************************ 00:07:47.024 START TEST nvmf_example 00:07:47.024 ************************************ 00:07:47.024 23:06:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:47.024 * Looking for test storage... 
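Condensed from the version.sh trace above: each version component is pulled out of include/spdk/version.h with the same grep | cut | tr pipeline the test runs, the patch level is appended only when SPDK_VERSION_PATCH is non-zero, an rc0 marker is added for pre-release suffixes, and the result is checked against what the Python bindings report. A minimal sketch under those assumptions (it presumes the tab-separated #define layout this tree uses):
    get_header_version() {   # e.g. get_header_version MAJOR -> 24
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"
    (( $(get_header_version PATCH) != 0 )) && version+=".$(get_header_version PATCH)"
    [[ -n "$(get_header_version SUFFIX)" ]] && version+="rc0"
    # the Python package should agree with the headers, e.g. 24.1.1rc0
    [[ "$(python3 -c 'import spdk; print(spdk.__version__)')" == "$version" ]]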
00:07:47.024 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:47.024 23:06:52 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.024 23:06:52 -- nvmf/common.sh@7 -- # uname -s 00:07:47.024 23:06:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.024 23:06:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.024 23:06:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.024 23:06:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.024 23:06:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.024 23:06:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.024 23:06:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.024 23:06:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.024 23:06:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.024 23:06:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.024 23:06:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:47.024 23:06:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:47.024 23:06:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.024 23:06:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.024 23:06:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.024 23:06:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:47.025 23:06:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.025 23:06:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.025 23:06:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.025 23:06:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.025 23:06:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.025 23:06:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.025 23:06:52 -- paths/export.sh@5 -- # export PATH 00:07:47.025 23:06:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.025 23:06:52 -- nvmf/common.sh@46 -- # : 0 00:07:47.025 23:06:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:47.025 23:06:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:47.025 23:06:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:47.025 23:06:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.025 23:06:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.025 23:06:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:47.025 23:06:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:47.025 23:06:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:47.025 23:06:52 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:47.025 23:06:52 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:47.025 23:06:52 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:47.025 23:06:52 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:47.025 23:06:52 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:47.025 23:06:52 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:47.025 23:06:52 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:47.025 23:06:52 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:47.025 23:06:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:47.025 23:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:47.025 23:06:52 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:47.025 23:06:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:47.025 23:06:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.025 23:06:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:47.025 23:06:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:47.025 23:06:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:47.025 23:06:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.025 23:06:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.025 23:06:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.025 23:06:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:47.025 23:06:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:47.025 23:06:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:47.025 23:06:52 -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.153 23:06:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:55.153 23:06:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:55.153 23:06:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:55.153 23:06:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:55.153 23:06:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:55.153 23:06:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:55.153 23:06:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:55.153 23:06:59 -- nvmf/common.sh@294 -- # net_devs=() 00:07:55.153 23:06:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:55.153 23:06:59 -- nvmf/common.sh@295 -- # e810=() 00:07:55.153 23:06:59 -- nvmf/common.sh@295 -- # local -ga e810 00:07:55.153 23:06:59 -- nvmf/common.sh@296 -- # x722=() 00:07:55.153 23:06:59 -- nvmf/common.sh@296 -- # local -ga x722 00:07:55.153 23:06:59 -- nvmf/common.sh@297 -- # mlx=() 00:07:55.153 23:06:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:55.153 23:06:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.153 23:06:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:55.153 23:06:59 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:55.153 23:06:59 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:55.153 23:06:59 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:07:55.153 23:06:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:55.153 23:06:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:55.153 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:55.153 23:06:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:55.153 23:06:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:55.153 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:55.153 23:06:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:55.153 23:06:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:55.153 23:06:59 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.153 23:06:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:55.153 23:06:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.153 23:06:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:55.153 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:55.153 23:06:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.153 23:06:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.153 23:06:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:55.153 23:06:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.153 23:06:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:55.153 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:55.153 23:06:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.153 23:06:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:55.153 23:06:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:55.153 23:06:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:55.153 23:06:59 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:55.153 23:06:59 -- nvmf/common.sh@57 -- # uname 00:07:55.153 23:06:59 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:55.153 23:06:59 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:55.153 23:06:59 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:55.153 23:06:59 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:55.153 23:06:59 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:55.153 23:06:59 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:55.153 23:06:59 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:55.153 23:06:59 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:55.153 23:06:59 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:55.153 23:06:59 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:55.153 23:06:59 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:55.153 23:06:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:55.153 23:06:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:55.153 23:06:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:55.153 23:06:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:55.153 23:06:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:55.153 23:06:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:07:55.153 23:06:59 -- nvmf/common.sh@104 -- # continue 2 00:07:55.153 23:06:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:55.153 23:06:59 -- nvmf/common.sh@104 -- # continue 2 00:07:55.153 23:06:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:55.153 23:06:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:07:55.153 23:06:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:55.153 23:06:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:55.153 23:06:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:55.153 23:06:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:55.153 23:06:59 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:55.153 23:06:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:07:55.153 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:55.153 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:55.153 altname enp217s0f0np0 00:07:55.153 altname ens818f0np0 00:07:55.153 inet 192.168.100.8/24 scope global mlx_0_0 00:07:55.153 valid_lft forever preferred_lft forever 00:07:55.153 23:06:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:55.153 23:06:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:07:55.153 23:06:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:55.153 23:06:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:55.153 23:06:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:55.153 23:06:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:55.153 23:06:59 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:55.153 23:06:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:07:55.153 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:55.153 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:55.153 altname enp217s0f1np1 00:07:55.153 altname ens818f1np1 00:07:55.153 inet 192.168.100.9/24 scope global mlx_0_1 00:07:55.153 valid_lft forever preferred_lft forever 00:07:55.153 23:06:59 -- nvmf/common.sh@410 -- # return 0 00:07:55.153 23:06:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:55.153 23:06:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:55.153 23:06:59 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:55.153 23:06:59 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:55.153 23:06:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:55.153 23:06:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:55.153 23:06:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:55.153 23:06:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:55.153 23:06:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:55.153 23:06:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:55.153 23:06:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.153 23:06:59 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:55.153 23:06:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:55.153 23:06:59 -- nvmf/common.sh@104 -- # continue 2 00:07:55.154 23:06:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:55.154 23:06:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.154 23:06:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:55.154 23:06:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.154 23:06:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:55.154 23:06:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:55.154 23:06:59 -- nvmf/common.sh@104 -- # continue 2 00:07:55.154 23:06:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:55.154 23:06:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:07:55.154 23:06:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:55.154 23:06:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:55.154 23:06:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:55.154 23:06:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:55.154 23:06:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:55.154 23:06:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:07:55.154 23:06:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:55.154 23:06:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:55.154 23:06:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:55.154 23:06:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:55.154 23:06:59 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:55.154 192.168.100.9' 00:07:55.154 23:06:59 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:55.154 192.168.100.9' 00:07:55.154 23:06:59 -- nvmf/common.sh@445 -- # head -n 1 00:07:55.154 23:06:59 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:55.154 23:06:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:55.154 192.168.100.9' 00:07:55.154 23:06:59 -- nvmf/common.sh@446 -- # tail -n +2 00:07:55.154 23:06:59 -- nvmf/common.sh@446 -- # head -n 1 00:07:55.154 23:06:59 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:55.154 23:06:59 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:55.154 23:06:59 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:55.154 23:06:59 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:55.154 23:06:59 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:55.154 23:06:59 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:55.154 23:06:59 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:55.154 23:06:59 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:55.154 23:06:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:55.154 23:06:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.154 23:06:59 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:55.154 23:06:59 -- target/nvmf_example.sh@34 -- # nvmfpid=474117 00:07:55.154 23:06:59 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:55.154 23:06:59 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:55.154 23:06:59 -- target/nvmf_example.sh@36 -- # waitforlisten 474117 00:07:55.154 23:06:59 -- common/autotest_common.sh@819 -- # '[' -z 474117 ']' 00:07:55.154 23:06:59 -- common/autotest_common.sh@823 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.154 23:06:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:55.154 23:06:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.154 23:06:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:55.154 23:06:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.154 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.154 23:07:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.154 23:07:00 -- common/autotest_common.sh@852 -- # return 0 00:07:55.154 23:07:00 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:55.154 23:07:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:55.154 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.154 23:07:00 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:55.154 23:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.154 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.413 23:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.413 23:07:00 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:55.413 23:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.413 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.413 23:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.413 23:07:00 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:55.413 23:07:00 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.413 23:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.413 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.413 23:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.413 23:07:00 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:55.413 23:07:00 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.413 23:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.413 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.413 23:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.413 23:07:00 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:55.413 23:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.413 23:07:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.413 23:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.413 23:07:00 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:55.413 23:07:00 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:55.413 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.803 Initializing NVMe Controllers 00:08:07.803 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.803 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:07.803 Initialization 
complete. Launching workers. 00:08:07.803 ======================================================== 00:08:07.803 Latency(us) 00:08:07.803 Device Information : IOPS MiB/s Average min max 00:08:07.803 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26604.40 103.92 2405.84 595.62 13783.33 00:08:07.803 ======================================================== 00:08:07.803 Total : 26604.40 103.92 2405.84 595.62 13783.33 00:08:07.803 00:08:07.803 23:07:12 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:07.803 23:07:12 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:07.803 23:07:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:07.803 23:07:12 -- nvmf/common.sh@116 -- # sync 00:08:07.803 23:07:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:07.803 23:07:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:07.803 23:07:12 -- nvmf/common.sh@119 -- # set +e 00:08:07.803 23:07:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:07.803 23:07:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:07.803 rmmod nvme_rdma 00:08:07.803 rmmod nvme_fabrics 00:08:07.803 23:07:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:07.803 23:07:12 -- nvmf/common.sh@123 -- # set -e 00:08:07.803 23:07:12 -- nvmf/common.sh@124 -- # return 0 00:08:07.803 23:07:12 -- nvmf/common.sh@477 -- # '[' -n 474117 ']' 00:08:07.803 23:07:12 -- nvmf/common.sh@478 -- # killprocess 474117 00:08:07.803 23:07:12 -- common/autotest_common.sh@926 -- # '[' -z 474117 ']' 00:08:07.803 23:07:12 -- common/autotest_common.sh@930 -- # kill -0 474117 00:08:07.803 23:07:12 -- common/autotest_common.sh@931 -- # uname 00:08:07.803 23:07:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:07.803 23:07:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 474117 00:08:07.803 23:07:12 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:07.803 23:07:12 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:07.803 23:07:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 474117' 00:08:07.803 killing process with pid 474117 00:08:07.803 23:07:12 -- common/autotest_common.sh@945 -- # kill 474117 00:08:07.803 23:07:12 -- common/autotest_common.sh@950 -- # wait 474117 00:08:07.803 nvmf threads initialize successfully 00:08:07.803 bdev subsystem init successfully 00:08:07.803 created a nvmf target service 00:08:07.803 create targets's poll groups done 00:08:07.803 all subsystems of target started 00:08:07.803 nvmf target is running 00:08:07.803 all subsystems of target stopped 00:08:07.803 destroy targets's poll groups done 00:08:07.803 destroyed the nvmf target service 00:08:07.803 bdev subsystem finish successfully 00:08:07.803 nvmf threads destroy successfully 00:08:07.803 23:07:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:07.803 23:07:12 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:07.803 23:07:12 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:07.803 23:07:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:07.803 23:07:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.803 00:08:07.803 real 0m19.997s 00:08:07.803 user 0m52.380s 00:08:07.803 sys 0m5.834s 00:08:07.803 23:07:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.803 23:07:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.803 ************************************ 00:08:07.803 END TEST nvmf_example 00:08:07.803 ************************************ 00:08:07.803 23:07:12 -- 
nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:07.803 23:07:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:07.803 23:07:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.803 23:07:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.803 ************************************ 00:08:07.803 START TEST nvmf_filesystem 00:08:07.803 ************************************ 00:08:07.803 23:07:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:07.803 * Looking for test storage... 00:08:07.803 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.803 23:07:12 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:07.803 23:07:12 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:07.803 23:07:12 -- common/autotest_common.sh@34 -- # set -e 00:08:07.803 23:07:12 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:07.803 23:07:12 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:07.803 23:07:12 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:07.803 23:07:12 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:07.803 23:07:12 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:07.803 23:07:12 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:07.803 23:07:12 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:07.803 23:07:12 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:07.803 23:07:12 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:07.803 23:07:12 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:07.803 23:07:12 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:07.803 23:07:12 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:07.803 23:07:12 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:07.803 23:07:12 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:07.803 23:07:12 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:07.803 23:07:12 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:07.803 23:07:12 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:07.803 23:07:12 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:07.803 23:07:12 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:07.803 23:07:12 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:07.803 23:07:12 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:07.803 23:07:12 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:07.803 23:07:12 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:07.803 23:07:12 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:07.803 23:07:12 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:07.803 23:07:12 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:07.803 23:07:12 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:07.803 23:07:12 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:07.803 23:07:12 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:07.803 23:07:12 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:07.803 23:07:12 -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:07.803 23:07:12 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:07.803 23:07:12 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:07.803 23:07:12 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:07.803 23:07:12 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:07.803 23:07:12 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:07.803 23:07:12 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:07.803 23:07:12 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:07.803 23:07:12 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:07.803 23:07:12 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:07.803 23:07:12 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:07.803 23:07:12 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:07.803 23:07:12 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:07.803 23:07:12 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:07.803 23:07:12 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:07.803 23:07:12 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:07.803 23:07:12 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:07.803 23:07:12 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:07.803 23:07:12 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:07.803 23:07:12 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:07.803 23:07:12 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:07.803 23:07:12 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:07.803 23:07:12 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:07.803 23:07:12 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:07.803 23:07:12 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:07.803 23:07:12 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:07.803 23:07:12 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:07.803 23:07:12 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:07.803 23:07:12 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:07.803 23:07:12 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:07.803 23:07:12 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:07.803 23:07:12 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:07.803 23:07:12 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:07.803 23:07:12 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:07.804 23:07:12 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:07.804 23:07:12 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:07.804 23:07:12 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:07.804 23:07:12 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:07.804 23:07:12 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:07.804 23:07:12 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:07.804 23:07:12 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:07.804 23:07:12 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:07.804 23:07:12 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:07.804 23:07:12 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:07.804 23:07:12 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:07.804 23:07:12 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:07.804 23:07:12 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:07.804 23:07:12 -- 
common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:07.804 23:07:12 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:07.804 23:07:12 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:07.804 23:07:12 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:07.804 23:07:12 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:07.804 23:07:12 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:07.804 23:07:12 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:07.804 23:07:12 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:07.804 23:07:12 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:07.804 23:07:12 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:07.804 23:07:12 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:07.804 23:07:12 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:07.804 23:07:12 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:07.804 23:07:12 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:07.804 23:07:12 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:07.804 23:07:12 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:07.804 23:07:12 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:07.804 23:07:12 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:07.804 23:07:12 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:07.804 23:07:12 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:07.804 23:07:12 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:07.804 23:07:12 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:07.804 #define SPDK_CONFIG_H 00:08:07.804 #define SPDK_CONFIG_APPS 1 00:08:07.804 #define SPDK_CONFIG_ARCH native 00:08:07.804 #undef SPDK_CONFIG_ASAN 00:08:07.804 #undef SPDK_CONFIG_AVAHI 00:08:07.804 #undef SPDK_CONFIG_CET 00:08:07.804 #define SPDK_CONFIG_COVERAGE 1 00:08:07.804 #define SPDK_CONFIG_CROSS_PREFIX 00:08:07.804 #undef SPDK_CONFIG_CRYPTO 00:08:07.804 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:07.804 #undef SPDK_CONFIG_CUSTOMOCF 00:08:07.804 #undef SPDK_CONFIG_DAOS 00:08:07.804 #define SPDK_CONFIG_DAOS_DIR 00:08:07.804 #define SPDK_CONFIG_DEBUG 1 00:08:07.804 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:07.804 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:07.804 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:07.804 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:07.804 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:07.804 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:07.804 #define SPDK_CONFIG_EXAMPLES 1 00:08:07.804 #undef SPDK_CONFIG_FC 00:08:07.804 #define SPDK_CONFIG_FC_PATH 00:08:07.804 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:07.804 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:07.804 #undef SPDK_CONFIG_FUSE 00:08:07.804 #undef SPDK_CONFIG_FUZZER 00:08:07.804 #define SPDK_CONFIG_FUZZER_LIB 00:08:07.804 #undef SPDK_CONFIG_GOLANG 00:08:07.804 #define 
SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:07.804 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:07.804 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:07.804 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:07.804 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:07.804 #define SPDK_CONFIG_IDXD 1 00:08:07.804 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:07.804 #undef SPDK_CONFIG_IPSEC_MB 00:08:07.804 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:07.804 #define SPDK_CONFIG_ISAL 1 00:08:07.804 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:07.804 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:07.804 #define SPDK_CONFIG_LIBDIR 00:08:07.804 #undef SPDK_CONFIG_LTO 00:08:07.804 #define SPDK_CONFIG_MAX_LCORES 00:08:07.804 #define SPDK_CONFIG_NVME_CUSE 1 00:08:07.804 #undef SPDK_CONFIG_OCF 00:08:07.804 #define SPDK_CONFIG_OCF_PATH 00:08:07.804 #define SPDK_CONFIG_OPENSSL_PATH 00:08:07.804 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:07.804 #undef SPDK_CONFIG_PGO_USE 00:08:07.804 #define SPDK_CONFIG_PREFIX /usr/local 00:08:07.804 #undef SPDK_CONFIG_RAID5F 00:08:07.804 #undef SPDK_CONFIG_RBD 00:08:07.804 #define SPDK_CONFIG_RDMA 1 00:08:07.804 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:07.804 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:07.804 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:07.804 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:07.804 #define SPDK_CONFIG_SHARED 1 00:08:07.804 #undef SPDK_CONFIG_SMA 00:08:07.804 #define SPDK_CONFIG_TESTS 1 00:08:07.804 #undef SPDK_CONFIG_TSAN 00:08:07.804 #define SPDK_CONFIG_UBLK 1 00:08:07.804 #define SPDK_CONFIG_UBSAN 1 00:08:07.804 #undef SPDK_CONFIG_UNIT_TESTS 00:08:07.804 #undef SPDK_CONFIG_URING 00:08:07.804 #define SPDK_CONFIG_URING_PATH 00:08:07.804 #undef SPDK_CONFIG_URING_ZNS 00:08:07.804 #undef SPDK_CONFIG_USDT 00:08:07.804 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:07.804 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:07.804 #undef SPDK_CONFIG_VFIO_USER 00:08:07.804 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:07.804 #define SPDK_CONFIG_VHOST 1 00:08:07.804 #define SPDK_CONFIG_VIRTIO 1 00:08:07.804 #undef SPDK_CONFIG_VTUNE 00:08:07.804 #define SPDK_CONFIG_VTUNE_DIR 00:08:07.804 #define SPDK_CONFIG_WERROR 1 00:08:07.804 #define SPDK_CONFIG_WPDK_DIR 00:08:07.804 #undef SPDK_CONFIG_XNVME 00:08:07.804 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:07.804 23:07:12 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:07.804 23:07:12 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:07.804 23:07:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.804 23:07:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.804 23:07:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.804 23:07:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.804 23:07:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.804 23:07:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.804 23:07:12 -- paths/export.sh@5 -- # export PATH 00:08:07.804 23:07:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.804 23:07:12 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:07.804 23:07:12 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:07.804 23:07:12 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:07.804 23:07:12 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:07.804 23:07:12 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:07.804 23:07:12 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:07.804 23:07:12 -- pm/common@16 -- # TEST_TAG=N/A 00:08:07.804 23:07:12 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:07.804 23:07:12 -- common/autotest_common.sh@52 -- # : 1 00:08:07.804 23:07:12 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:07.804 23:07:12 -- common/autotest_common.sh@56 -- # : 0 00:08:07.804 23:07:12 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:07.804 23:07:12 -- common/autotest_common.sh@58 -- # : 0 00:08:07.804 23:07:12 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:07.804 23:07:12 -- common/autotest_common.sh@60 -- # : 1 00:08:07.804 23:07:12 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:07.804 23:07:12 -- common/autotest_common.sh@62 -- # : 0 00:08:07.804 23:07:12 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:07.804 23:07:12 -- common/autotest_common.sh@64 -- # : 00:08:07.804 23:07:12 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:07.804 23:07:12 
-- common/autotest_common.sh@66 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:07.805 23:07:12 -- common/autotest_common.sh@68 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:07.805 23:07:12 -- common/autotest_common.sh@70 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:07.805 23:07:12 -- common/autotest_common.sh@72 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:07.805 23:07:12 -- common/autotest_common.sh@74 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:07.805 23:07:12 -- common/autotest_common.sh@76 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:07.805 23:07:12 -- common/autotest_common.sh@78 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:07.805 23:07:12 -- common/autotest_common.sh@80 -- # : 1 00:08:07.805 23:07:12 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:07.805 23:07:12 -- common/autotest_common.sh@82 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:07.805 23:07:12 -- common/autotest_common.sh@84 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:07.805 23:07:12 -- common/autotest_common.sh@86 -- # : 1 00:08:07.805 23:07:12 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:07.805 23:07:12 -- common/autotest_common.sh@88 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:07.805 23:07:12 -- common/autotest_common.sh@90 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:07.805 23:07:12 -- common/autotest_common.sh@92 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:07.805 23:07:12 -- common/autotest_common.sh@94 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:07.805 23:07:12 -- common/autotest_common.sh@96 -- # : rdma 00:08:07.805 23:07:12 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:07.805 23:07:12 -- common/autotest_common.sh@98 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:07.805 23:07:12 -- common/autotest_common.sh@100 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:07.805 23:07:12 -- common/autotest_common.sh@102 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:07.805 23:07:12 -- common/autotest_common.sh@104 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:07.805 23:07:12 -- common/autotest_common.sh@106 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:07.805 23:07:12 -- common/autotest_common.sh@108 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:07.805 23:07:12 -- common/autotest_common.sh@110 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:07.805 23:07:12 -- common/autotest_common.sh@112 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 
00:08:07.805 23:07:12 -- common/autotest_common.sh@114 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:07.805 23:07:12 -- common/autotest_common.sh@116 -- # : 1 00:08:07.805 23:07:12 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:07.805 23:07:12 -- common/autotest_common.sh@118 -- # : 00:08:07.805 23:07:12 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:07.805 23:07:12 -- common/autotest_common.sh@120 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:07.805 23:07:12 -- common/autotest_common.sh@122 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:07.805 23:07:12 -- common/autotest_common.sh@124 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:07.805 23:07:12 -- common/autotest_common.sh@126 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:07.805 23:07:12 -- common/autotest_common.sh@128 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:07.805 23:07:12 -- common/autotest_common.sh@130 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:07.805 23:07:12 -- common/autotest_common.sh@132 -- # : 00:08:07.805 23:07:12 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:07.805 23:07:12 -- common/autotest_common.sh@134 -- # : true 00:08:07.805 23:07:12 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:07.805 23:07:12 -- common/autotest_common.sh@136 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:07.805 23:07:12 -- common/autotest_common.sh@138 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:07.805 23:07:12 -- common/autotest_common.sh@140 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:07.805 23:07:12 -- common/autotest_common.sh@142 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:07.805 23:07:12 -- common/autotest_common.sh@144 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:07.805 23:07:12 -- common/autotest_common.sh@146 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:07.805 23:07:12 -- common/autotest_common.sh@148 -- # : mlx5 00:08:07.805 23:07:12 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:07.805 23:07:12 -- common/autotest_common.sh@150 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:07.805 23:07:12 -- common/autotest_common.sh@152 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:07.805 23:07:12 -- common/autotest_common.sh@154 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:07.805 23:07:12 -- common/autotest_common.sh@156 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:07.805 23:07:12 -- common/autotest_common.sh@158 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:07.805 23:07:12 -- common/autotest_common.sh@160 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@161 -- # export 
SPDK_TEST_ACCEL_IOAT 00:08:07.805 23:07:12 -- common/autotest_common.sh@163 -- # : 00:08:07.805 23:07:12 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:07.805 23:07:12 -- common/autotest_common.sh@165 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:07.805 23:07:12 -- common/autotest_common.sh@167 -- # : 0 00:08:07.805 23:07:12 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:07.805 23:07:12 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:07.805 23:07:12 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:07.805 23:07:12 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:07.805 23:07:12 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:07.805 23:07:12 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.805 23:07:12 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.805 23:07:12 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.805 23:07:12 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.805 23:07:12 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.805 23:07:12 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.805 23:07:12 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:07.805 23:07:12 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:07.805 23:07:12 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:07.805 23:07:12 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:07.805 23:07:12 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.805 23:07:12 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.805 23:07:12 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.805 23:07:12 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.805 23:07:12 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:07.805 23:07:12 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:07.805 23:07:12 -- common/autotest_common.sh@196 -- # cat 00:08:07.806 23:07:12 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:07.806 23:07:12 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.806 23:07:12 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.806 23:07:12 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.806 23:07:12 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.806 23:07:12 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:07.806 23:07:12 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:07.806 23:07:12 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:07.806 23:07:12 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:07.806 23:07:12 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:07.806 23:07:12 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:07.806 23:07:12 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.806 23:07:12 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.806 23:07:12 -- common/autotest_common.sh@240 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.806 23:07:12 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.806 23:07:12 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:07.806 23:07:12 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:07.806 23:07:12 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.806 23:07:12 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.806 23:07:12 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:07.806 23:07:12 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:07.806 23:07:12 -- common/autotest_common.sh@249 -- # valgrind= 00:08:07.806 23:07:12 -- common/autotest_common.sh@255 -- # uname -s 00:08:07.806 23:07:12 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:07.806 23:07:12 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:07.806 23:07:12 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:07.806 23:07:12 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:07.806 23:07:12 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:07.806 23:07:12 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j112 00:08:07.806 23:07:12 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:07.806 23:07:12 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:07.806 23:07:12 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:07.806 23:07:12 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:07.806 23:07:12 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:07.806 23:07:12 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:07.806 23:07:12 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:07.806 23:07:12 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=rdma 00:08:07.806 23:07:12 -- common/autotest_common.sh@309 -- # [[ -z 476544 ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@309 -- # kill -0 476544 00:08:07.806 23:07:12 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:07.806 23:07:12 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:07.806 23:07:12 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:07.806 23:07:12 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:07.806 23:07:12 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:07.806 23:07:12 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:07.806 23:07:12 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:07.806 23:07:12 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.3LYQcI 00:08:07.806 23:07:12 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:07.806 23:07:12 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@346 -- # mkdir -p 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3LYQcI/tests/target /tmp/spdk.3LYQcI 00:08:07.806 23:07:12 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:07.806 23:07:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.806 23:07:12 -- common/autotest_common.sh@318 -- # df -T 00:08:07.806 23:07:12 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:07.806 23:07:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:07.806 23:07:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=4096 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:07.806 23:07:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=5284425728 00:08:07.806 23:07:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=55079686144 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61730615296 00:08:07.806 23:07:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=6650929152 00:08:07.806 23:07:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=30816886784 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30865305600 00:08:07.806 23:07:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=48418816 00:08:07.806 23:07:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=12336685056 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12346126336 00:08:07.806 23:07:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=9441280 00:08:07.806 23:07:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=30864367616 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30865309696 00:08:07.806 23:07:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=942080 00:08:07.806 23:07:12 -- common/autotest_common.sh@351 -- # 
read -r source fs size use avail _ mount 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=6173048832 00:08:07.806 23:07:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6173061120 00:08:07.806 23:07:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:08:07.806 23:07:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.806 23:07:12 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:07.806 * Looking for test storage... 00:08:07.806 23:07:12 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:07.806 23:07:12 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:07.806 23:07:12 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.806 23:07:12 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:07.806 23:07:12 -- common/autotest_common.sh@363 -- # mount=/ 00:08:07.806 23:07:12 -- common/autotest_common.sh@365 -- # target_space=55079686144 00:08:07.806 23:07:12 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:07.806 23:07:12 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:07.806 23:07:12 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@372 -- # new_size=8865521664 00:08:07.806 23:07:12 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:07.806 23:07:12 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.806 23:07:12 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.806 23:07:12 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.806 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.806 23:07:12 -- common/autotest_common.sh@380 -- # return 0 00:08:07.806 23:07:12 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:07.806 23:07:12 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:07.806 23:07:12 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:07.806 23:07:12 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:07.806 23:07:12 -- common/autotest_common.sh@1672 -- # true 00:08:07.806 23:07:12 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:07.806 23:07:12 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:07.806 23:07:12 -- common/autotest_common.sh@27 -- # exec 00:08:07.806 23:07:12 -- common/autotest_common.sh@29 -- # exec 00:08:07.806 23:07:12 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:07.806 23:07:12 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:07.807 23:07:12 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:07.807 23:07:12 -- common/autotest_common.sh@18 -- # set -x 00:08:07.807 23:07:12 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.807 23:07:12 -- nvmf/common.sh@7 -- # uname -s 00:08:07.807 23:07:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.807 23:07:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.807 23:07:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.807 23:07:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.807 23:07:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.807 23:07:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.807 23:07:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.807 23:07:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.807 23:07:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.807 23:07:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.807 23:07:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:07.807 23:07:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:07.807 23:07:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.807 23:07:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.807 23:07:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.807 23:07:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:07.807 23:07:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.807 23:07:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.807 23:07:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.807 23:07:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.807 23:07:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.807 23:07:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.807 23:07:12 -- paths/export.sh@5 -- # export PATH 00:08:07.807 23:07:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.807 23:07:12 -- nvmf/common.sh@46 -- # : 0 00:08:07.807 23:07:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.807 23:07:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.807 23:07:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.807 23:07:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.807 23:07:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.807 23:07:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:07.807 23:07:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.807 23:07:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.807 23:07:12 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:07.807 23:07:12 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:07.807 23:07:12 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:07.807 23:07:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:07.807 23:07:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.807 23:07:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.807 23:07:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.807 23:07:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.807 23:07:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.807 23:07:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.807 23:07:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.807 23:07:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:07.807 23:07:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:07.807 23:07:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:07.807 23:07:12 -- common/autotest_common.sh@10 -- # set +x 00:08:14.377 23:07:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:14.377 23:07:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:14.377 23:07:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:14.377 23:07:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:14.377 23:07:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:14.377 23:07:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:14.377 23:07:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:14.377 23:07:19 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:14.377 23:07:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:14.377 23:07:19 -- nvmf/common.sh@295 -- # e810=() 00:08:14.377 23:07:19 -- nvmf/common.sh@295 -- # local -ga e810 00:08:14.377 23:07:19 -- nvmf/common.sh@296 -- # x722=() 00:08:14.378 23:07:19 -- nvmf/common.sh@296 -- # local -ga x722 00:08:14.378 23:07:19 -- nvmf/common.sh@297 -- # mlx=() 00:08:14.378 23:07:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:14.378 23:07:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.378 23:07:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:14.378 23:07:19 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:14.378 23:07:19 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:14.378 23:07:19 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:14.378 23:07:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:14.378 23:07:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:14.378 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:14.378 23:07:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:14.378 23:07:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:14.378 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:14.378 23:07:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:14.378 23:07:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:14.378 23:07:19 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:14.378 
23:07:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.378 23:07:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:14.378 23:07:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.378 23:07:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:14.378 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.378 23:07:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.378 23:07:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:14.378 23:07:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.378 23:07:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:14.378 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.378 23:07:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:14.378 23:07:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:14.378 23:07:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:14.378 23:07:19 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:14.378 23:07:19 -- nvmf/common.sh@57 -- # uname 00:08:14.378 23:07:19 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:14.378 23:07:19 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:14.378 23:07:19 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:14.378 23:07:19 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:14.378 23:07:19 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:14.378 23:07:19 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:14.378 23:07:19 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:14.378 23:07:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:14.378 23:07:19 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:14.378 23:07:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:14.378 23:07:19 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:14.378 23:07:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:14.378 23:07:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:14.378 23:07:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:14.378 23:07:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:14.378 23:07:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:14.378 23:07:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@104 -- # continue 2 00:08:14.378 23:07:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@104 -- # continue 2 00:08:14.378 23:07:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:14.378 23:07:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:14.378 23:07:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:14.378 23:07:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:14.378 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:14.378 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:14.378 altname enp217s0f0np0 00:08:14.378 altname ens818f0np0 00:08:14.378 inet 192.168.100.8/24 scope global mlx_0_0 00:08:14.378 valid_lft forever preferred_lft forever 00:08:14.378 23:07:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:14.378 23:07:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:14.378 23:07:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:14.378 23:07:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:14.378 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:14.378 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:14.378 altname enp217s0f1np1 00:08:14.378 altname ens818f1np1 00:08:14.378 inet 192.168.100.9/24 scope global mlx_0_1 00:08:14.378 valid_lft forever preferred_lft forever 00:08:14.378 23:07:19 -- nvmf/common.sh@410 -- # return 0 00:08:14.378 23:07:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:14.378 23:07:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:14.378 23:07:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:14.378 23:07:19 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:14.378 23:07:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:14.378 23:07:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:14.378 23:07:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:14.378 23:07:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:14.378 23:07:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:14.378 23:07:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@104 -- # continue 2 00:08:14.378 23:07:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.378 23:07:19 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:14.378 23:07:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@104 -- # continue 2 00:08:14.378 23:07:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:14.378 23:07:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:14.378 23:07:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:14.378 23:07:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:14.378 23:07:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:14.379 23:07:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:14.379 23:07:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:14.379 192.168.100.9' 00:08:14.379 23:07:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:14.379 192.168.100.9' 00:08:14.379 23:07:19 -- nvmf/common.sh@445 -- # head -n 1 00:08:14.379 23:07:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:14.379 23:07:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:14.379 192.168.100.9' 00:08:14.379 23:07:19 -- nvmf/common.sh@446 -- # tail -n +2 00:08:14.379 23:07:19 -- nvmf/common.sh@446 -- # head -n 1 00:08:14.379 23:07:19 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:14.379 23:07:19 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:14.379 23:07:19 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:14.379 23:07:19 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:14.379 23:07:19 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:14.379 23:07:19 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:14.379 23:07:19 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:14.379 23:07:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:14.379 23:07:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.379 23:07:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.379 ************************************ 00:08:14.379 START TEST nvmf_filesystem_no_in_capsule 00:08:14.379 ************************************ 00:08:14.379 23:07:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:14.379 23:07:19 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:14.379 23:07:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:14.379 23:07:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:14.379 23:07:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:14.379 23:07:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.379 23:07:19 -- nvmf/common.sh@469 -- # nvmfpid=479706 00:08:14.379 23:07:19 -- nvmf/common.sh@470 -- # waitforlisten 479706 00:08:14.379 23:07:19 -- common/autotest_common.sh@819 -- # '[' -z 479706 ']' 00:08:14.379 23:07:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.379 23:07:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:14.379 23:07:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
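All of the address discovery above reduces to one small pipeline per RDMA netdev; get_ip_address as traced is just ip/awk/cut. Condensed, using only commands visible in the trace (the mlx_0_0/mlx_0_1 names and 192.168.100.x addresses are simply what this rig reports):

  # First IPv4 address of each RDMA-capable netdev found earlier.
  for ifc in mlx_0_0 mlx_0_1; do
      addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
      echo "$ifc -> $addr"   # mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9 here
  done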
00:08:14.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.379 23:07:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:14.379 23:07:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.379 23:07:19 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.379 [2024-11-02 23:07:19.350642] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:14.379 [2024-11-02 23:07:19.350696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.379 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.379 [2024-11-02 23:07:19.419689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.379 [2024-11-02 23:07:19.494722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.379 [2024-11-02 23:07:19.494849] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.379 [2024-11-02 23:07:19.494859] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.379 [2024-11-02 23:07:19.494870] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.379 [2024-11-02 23:07:19.494915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.379 [2024-11-02 23:07:19.495013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.379 [2024-11-02 23:07:19.495036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.379 [2024-11-02 23:07:19.495038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.638 23:07:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.638 23:07:20 -- common/autotest_common.sh@852 -- # return 0 00:08:14.638 23:07:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:14.638 23:07:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:14.638 23:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.638 23:07:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.638 23:07:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:14.638 23:07:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:14.638 23:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.638 23:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.638 [2024-11-02 23:07:20.233404] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:14.638 [2024-11-02 23:07:20.254295] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22b7090/0x22bb580) succeed. 00:08:14.638 [2024-11-02 23:07:20.263358] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22b8680/0x22fcc20) succeed. 
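At this point the target application is up and the RDMA transport has been registered. Stripped of the harness wrappers (nvmfappstart, waitforlisten, rpc_cmd), the sequence appears to reduce to roughly the following; scripts/rpc.py against the default /var/tmp/spdk.sock socket is an assumption standing in for rpc_cmd, and the readiness loop is a simplified stand-in for waitforlisten:

  # Start the NVMe-oF target with the shm id, tracepoint mask and core mask from the log.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Crude wait for the JSON-RPC socket to appear before issuing RPCs.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

  # Register the RDMA transport; -c 0 disables in-capsule data for this test group.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0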
00:08:14.638 23:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.638 23:07:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:14.638 23:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.638 23:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.897 Malloc1 00:08:14.897 23:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.897 23:07:20 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:14.897 23:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.897 23:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.897 23:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.897 23:07:20 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.897 23:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.897 23:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.897 23:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.897 23:07:20 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:14.898 23:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.898 23:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.898 [2024-11-02 23:07:20.510919] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:14.898 23:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.898 23:07:20 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:14.898 23:07:20 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:14.898 23:07:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:14.898 23:07:20 -- common/autotest_common.sh@1359 -- # local bs 00:08:14.898 23:07:20 -- common/autotest_common.sh@1360 -- # local nb 00:08:14.898 23:07:20 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:14.898 23:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.898 23:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.898 23:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.898 23:07:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:14.898 { 00:08:14.898 "name": "Malloc1", 00:08:14.898 "aliases": [ 00:08:14.898 "c66c1f45-7e7d-4f2b-8eaa-742d5a1db34c" 00:08:14.898 ], 00:08:14.898 "product_name": "Malloc disk", 00:08:14.898 "block_size": 512, 00:08:14.898 "num_blocks": 1048576, 00:08:14.898 "uuid": "c66c1f45-7e7d-4f2b-8eaa-742d5a1db34c", 00:08:14.898 "assigned_rate_limits": { 00:08:14.898 "rw_ios_per_sec": 0, 00:08:14.898 "rw_mbytes_per_sec": 0, 00:08:14.898 "r_mbytes_per_sec": 0, 00:08:14.898 "w_mbytes_per_sec": 0 00:08:14.898 }, 00:08:14.898 "claimed": true, 00:08:14.898 "claim_type": "exclusive_write", 00:08:14.898 "zoned": false, 00:08:14.898 "supported_io_types": { 00:08:14.898 "read": true, 00:08:14.898 "write": true, 00:08:14.898 "unmap": true, 00:08:14.898 "write_zeroes": true, 00:08:14.898 "flush": true, 00:08:14.898 "reset": true, 00:08:14.898 "compare": false, 00:08:14.898 "compare_and_write": false, 00:08:14.898 "abort": true, 00:08:14.898 "nvme_admin": false, 00:08:14.898 "nvme_io": false 00:08:14.898 }, 00:08:14.898 "memory_domains": [ 00:08:14.898 { 00:08:14.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.898 "dma_device_type": 2 00:08:14.898 } 00:08:14.898 ], 00:08:14.898 
"driver_specific": {} 00:08:14.898 } 00:08:14.898 ]' 00:08:14.898 23:07:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:14.898 23:07:20 -- common/autotest_common.sh@1362 -- # bs=512 00:08:14.898 23:07:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:14.898 23:07:20 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:14.898 23:07:20 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:14.898 23:07:20 -- common/autotest_common.sh@1367 -- # echo 512 00:08:14.898 23:07:20 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:14.898 23:07:20 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:16.275 23:07:21 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:16.275 23:07:21 -- common/autotest_common.sh@1177 -- # local i=0 00:08:16.275 23:07:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:16.275 23:07:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:16.275 23:07:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:18.183 23:07:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:18.183 23:07:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:18.183 23:07:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:18.183 23:07:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:18.183 23:07:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:18.183 23:07:23 -- common/autotest_common.sh@1187 -- # return 0 00:08:18.183 23:07:23 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:18.183 23:07:23 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:18.183 23:07:23 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:18.183 23:07:23 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:18.183 23:07:23 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:18.183 23:07:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:18.183 23:07:23 -- setup/common.sh@80 -- # echo 536870912 00:08:18.183 23:07:23 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:18.183 23:07:23 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:18.183 23:07:23 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:18.183 23:07:23 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:18.183 23:07:23 -- target/filesystem.sh@69 -- # partprobe 00:08:18.183 23:07:23 -- target/filesystem.sh@70 -- # sleep 1 00:08:19.121 23:07:24 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:19.121 23:07:24 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:19.121 23:07:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.121 23:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.121 23:07:24 -- common/autotest_common.sh@10 -- # set +x 00:08:19.121 ************************************ 00:08:19.121 START TEST filesystem_ext4 00:08:19.121 ************************************ 00:08:19.121 23:07:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:19.121 23:07:24 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:19.121 23:07:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.121 
23:07:24 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:19.121 23:07:24 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:19.121 23:07:24 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.121 23:07:24 -- common/autotest_common.sh@904 -- # local i=0 00:08:19.121 23:07:24 -- common/autotest_common.sh@905 -- # local force 00:08:19.121 23:07:24 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:19.121 23:07:24 -- common/autotest_common.sh@908 -- # force=-F 00:08:19.121 23:07:24 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:19.121 mke2fs 1.47.0 (5-Feb-2023) 00:08:19.381 Discarding device blocks: 0/522240 done 00:08:19.381 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:19.381 Filesystem UUID: 4252a330-fdac-4566-9789-8b802b35c014 00:08:19.381 Superblock backups stored on blocks: 00:08:19.381 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:19.381 00:08:19.381 Allocating group tables: 0/64 done 00:08:19.381 Writing inode tables: 0/64 done 00:08:19.381 Creating journal (8192 blocks): done 00:08:19.381 Writing superblocks and filesystem accounting information: 0/64 done 00:08:19.381 00:08:19.381 23:07:24 -- common/autotest_common.sh@921 -- # return 0 00:08:19.381 23:07:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.381 23:07:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.381 23:07:24 -- target/filesystem.sh@25 -- # sync 00:08:19.381 23:07:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.381 23:07:24 -- target/filesystem.sh@27 -- # sync 00:08:19.381 23:07:24 -- target/filesystem.sh@29 -- # i=0 00:08:19.381 23:07:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.381 23:07:25 -- target/filesystem.sh@37 -- # kill -0 479706 00:08:19.381 23:07:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.381 23:07:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.381 23:07:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.381 23:07:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.381 00:08:19.381 real 0m0.197s 00:08:19.381 user 0m0.027s 00:08:19.381 sys 0m0.075s 00:08:19.381 23:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.381 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.381 ************************************ 00:08:19.381 END TEST filesystem_ext4 00:08:19.381 ************************************ 00:08:19.381 23:07:25 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:19.381 23:07:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.381 23:07:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.381 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.381 ************************************ 00:08:19.381 START TEST filesystem_btrfs 00:08:19.381 ************************************ 00:08:19.381 23:07:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:19.381 23:07:25 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:19.381 23:07:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.381 23:07:25 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:19.381 23:07:25 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:19.381 23:07:25 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.381 23:07:25 -- common/autotest_common.sh@904 -- # local 
i=0 00:08:19.381 23:07:25 -- common/autotest_common.sh@905 -- # local force 00:08:19.381 23:07:25 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:19.381 23:07:25 -- common/autotest_common.sh@910 -- # force=-f 00:08:19.381 23:07:25 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:19.641 btrfs-progs v6.8.1 00:08:19.641 See https://btrfs.readthedocs.io for more information. 00:08:19.641 00:08:19.641 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:19.641 NOTE: several default settings have changed in version 5.15, please make sure 00:08:19.641 this does not affect your deployments: 00:08:19.641 - DUP for metadata (-m dup) 00:08:19.641 - enabled no-holes (-O no-holes) 00:08:19.641 - enabled free-space-tree (-R free-space-tree) 00:08:19.641 00:08:19.641 Label: (null) 00:08:19.641 UUID: fda9a926-bc57-44c0-bbc7-f3f081208dda 00:08:19.641 Node size: 16384 00:08:19.641 Sector size: 4096 (CPU page size: 4096) 00:08:19.641 Filesystem size: 510.00MiB 00:08:19.641 Block group profiles: 00:08:19.641 Data: single 8.00MiB 00:08:19.641 Metadata: DUP 32.00MiB 00:08:19.641 System: DUP 8.00MiB 00:08:19.641 SSD detected: yes 00:08:19.641 Zoned device: no 00:08:19.641 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:19.641 Checksum: crc32c 00:08:19.641 Number of devices: 1 00:08:19.641 Devices: 00:08:19.641 ID SIZE PATH 00:08:19.641 1 510.00MiB /dev/nvme0n1p1 00:08:19.641 00:08:19.641 23:07:25 -- common/autotest_common.sh@921 -- # return 0 00:08:19.641 23:07:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.641 23:07:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.641 23:07:25 -- target/filesystem.sh@25 -- # sync 00:08:19.641 23:07:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.641 23:07:25 -- target/filesystem.sh@27 -- # sync 00:08:19.641 23:07:25 -- target/filesystem.sh@29 -- # i=0 00:08:19.641 23:07:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.641 23:07:25 -- target/filesystem.sh@37 -- # kill -0 479706 00:08:19.641 23:07:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.641 23:07:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.641 23:07:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.641 23:07:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.641 00:08:19.641 real 0m0.249s 00:08:19.641 user 0m0.030s 00:08:19.641 sys 0m0.129s 00:08:19.641 23:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.641 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.641 ************************************ 00:08:19.641 END TEST filesystem_btrfs 00:08:19.641 ************************************ 00:08:19.641 23:07:25 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:19.641 23:07:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.641 23:07:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.641 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.901 ************************************ 00:08:19.901 START TEST filesystem_xfs 00:08:19.901 ************************************ 00:08:19.901 23:07:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:19.901 23:07:25 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:19.901 23:07:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.901 23:07:25 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:19.901 23:07:25 -- 
common/autotest_common.sh@902 -- # local fstype=xfs 00:08:19.901 23:07:25 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.901 23:07:25 -- common/autotest_common.sh@904 -- # local i=0 00:08:19.901 23:07:25 -- common/autotest_common.sh@905 -- # local force 00:08:19.901 23:07:25 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:19.901 23:07:25 -- common/autotest_common.sh@910 -- # force=-f 00:08:19.901 23:07:25 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:19.901 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:19.901 = sectsz=512 attr=2, projid32bit=1 00:08:19.901 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:19.901 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:19.901 data = bsize=4096 blocks=130560, imaxpct=25 00:08:19.901 = sunit=0 swidth=0 blks 00:08:19.901 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:19.901 log =internal log bsize=4096 blocks=16384, version=2 00:08:19.901 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:19.901 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:19.901 Discarding blocks...Done. 00:08:19.901 23:07:25 -- common/autotest_common.sh@921 -- # return 0 00:08:19.901 23:07:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.901 23:07:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.901 23:07:25 -- target/filesystem.sh@25 -- # sync 00:08:19.901 23:07:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.901 23:07:25 -- target/filesystem.sh@27 -- # sync 00:08:19.901 23:07:25 -- target/filesystem.sh@29 -- # i=0 00:08:19.901 23:07:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.901 23:07:25 -- target/filesystem.sh@37 -- # kill -0 479706 00:08:19.901 23:07:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.901 23:07:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.901 23:07:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.901 23:07:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.901 00:08:19.901 real 0m0.210s 00:08:19.901 user 0m0.031s 00:08:19.901 sys 0m0.079s 00:08:19.901 23:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.901 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.901 ************************************ 00:08:19.901 END TEST filesystem_xfs 00:08:19.901 ************************************ 00:08:19.901 23:07:25 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:20.160 23:07:25 -- target/filesystem.sh@93 -- # sync 00:08:20.160 23:07:25 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.098 23:07:26 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.098 23:07:26 -- common/autotest_common.sh@1198 -- # local i=0 00:08:21.098 23:07:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:21.098 23:07:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.098 23:07:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:21.098 23:07:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.098 23:07:26 -- common/autotest_common.sh@1210 -- # return 0 00:08:21.098 23:07:26 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.098 23:07:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.098 23:07:26 -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.098 23:07:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.098 23:07:26 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.098 23:07:26 -- target/filesystem.sh@101 -- # killprocess 479706 00:08:21.098 23:07:26 -- common/autotest_common.sh@926 -- # '[' -z 479706 ']' 00:08:21.098 23:07:26 -- common/autotest_common.sh@930 -- # kill -0 479706 00:08:21.098 23:07:26 -- common/autotest_common.sh@931 -- # uname 00:08:21.098 23:07:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:21.098 23:07:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 479706 00:08:21.098 23:07:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:21.098 23:07:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:21.098 23:07:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 479706' 00:08:21.098 killing process with pid 479706 00:08:21.098 23:07:26 -- common/autotest_common.sh@945 -- # kill 479706 00:08:21.098 23:07:26 -- common/autotest_common.sh@950 -- # wait 479706 00:08:21.667 23:07:27 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:21.667 00:08:21.667 real 0m7.872s 00:08:21.667 user 0m30.674s 00:08:21.667 sys 0m1.159s 00:08:21.667 23:07:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.667 23:07:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.667 ************************************ 00:08:21.667 END TEST nvmf_filesystem_no_in_capsule 00:08:21.667 ************************************ 00:08:21.667 23:07:27 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:21.667 23:07:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:21.667 23:07:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.667 23:07:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.667 ************************************ 00:08:21.667 START TEST nvmf_filesystem_in_capsule 00:08:21.667 ************************************ 00:08:21.667 23:07:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:21.667 23:07:27 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:21.667 23:07:27 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:21.667 23:07:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:21.667 23:07:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:21.667 23:07:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.667 23:07:27 -- nvmf/common.sh@469 -- # nvmfpid=481277 00:08:21.667 23:07:27 -- nvmf/common.sh@470 -- # waitforlisten 481277 00:08:21.667 23:07:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.667 23:07:27 -- common/autotest_common.sh@819 -- # '[' -z 481277 ']' 00:08:21.667 23:07:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.667 23:07:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:21.667 23:07:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.667 23:07:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:21.667 23:07:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.667 [2024-11-02 23:07:27.278827] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
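The in_capsule suite above starts the same way the no_in_capsule one did: nvmfappstart launches build/bin/nvmf_tgt with the mask and tracepoint flags shown, and waitforlisten blocks until the target's RPC socket answers. A minimal standalone sketch of that start-up pattern, assuming the SPDK tree layout and the rpc_get_methods probe (neither is copied verbatim from this log):

# start the NVMe-oF target on 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF)
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# block until the target answers on its RPC UNIX socket before issuing any further RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done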
00:08:21.667 [2024-11-02 23:07:27.278875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.667 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.667 [2024-11-02 23:07:27.347225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.667 [2024-11-02 23:07:27.420830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.667 [2024-11-02 23:07:27.420939] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.667 [2024-11-02 23:07:27.420949] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.667 [2024-11-02 23:07:27.420958] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.667 [2024-11-02 23:07:27.421015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.667 [2024-11-02 23:07:27.421130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.667 [2024-11-02 23:07:27.421192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.667 [2024-11-02 23:07:27.421194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.605 23:07:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:22.605 23:07:28 -- common/autotest_common.sh@852 -- # return 0 00:08:22.605 23:07:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:22.605 23:07:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:22.605 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:22.605 23:07:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.605 23:07:28 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:22.605 23:07:28 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:22.605 23:07:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.605 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:22.605 [2024-11-02 23:07:28.186321] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fb1090/0x1fb5580) succeed. 00:08:22.605 [2024-11-02 23:07:28.195661] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fb2680/0x1ff6c20) succeed. 
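The nvmf_create_transport call above (note -c 4096, the in-capsule data size this suite exercises) is what registers the two mlx5 IB devices, and the trace that follows exports a 512 MiB malloc bdev through nqn.2016-06.io.spdk:cnode1 on the RDMA listener. Condensed from those rpc_cmd calls, the equivalent standalone sequence would look roughly like this (the scripts/rpc.py entry point is an assumption; the test drives the same RPCs through its rpc_cmd wrapper):

# RDMA transport: 8 KiB I/O unit, 4 KiB of in-capsule data, 1024 shared buffers
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096

# back a subsystem with a 512 MiB (1048576 x 512-byte blocks) malloc bdev and listen on RDMA
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420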
00:08:22.605 23:07:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.605 23:07:28 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:22.605 23:07:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.605 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:22.863 Malloc1 00:08:22.863 23:07:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.863 23:07:28 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.863 23:07:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.863 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:22.863 23:07:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.863 23:07:28 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.863 23:07:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.863 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:22.864 23:07:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.864 23:07:28 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:22.864 23:07:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.864 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:22.864 [2024-11-02 23:07:28.463543] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:22.864 23:07:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.864 23:07:28 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:22.864 23:07:28 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:22.864 23:07:28 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:22.864 23:07:28 -- common/autotest_common.sh@1359 -- # local bs 00:08:22.864 23:07:28 -- common/autotest_common.sh@1360 -- # local nb 00:08:22.864 23:07:28 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:22.864 23:07:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.864 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:22.864 23:07:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.864 23:07:28 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:22.864 { 00:08:22.864 "name": "Malloc1", 00:08:22.864 "aliases": [ 00:08:22.864 "e437deeb-8f9b-4bc4-b508-ecc60600d9e7" 00:08:22.864 ], 00:08:22.864 "product_name": "Malloc disk", 00:08:22.864 "block_size": 512, 00:08:22.864 "num_blocks": 1048576, 00:08:22.864 "uuid": "e437deeb-8f9b-4bc4-b508-ecc60600d9e7", 00:08:22.864 "assigned_rate_limits": { 00:08:22.864 "rw_ios_per_sec": 0, 00:08:22.864 "rw_mbytes_per_sec": 0, 00:08:22.864 "r_mbytes_per_sec": 0, 00:08:22.864 "w_mbytes_per_sec": 0 00:08:22.864 }, 00:08:22.864 "claimed": true, 00:08:22.864 "claim_type": "exclusive_write", 00:08:22.864 "zoned": false, 00:08:22.864 "supported_io_types": { 00:08:22.864 "read": true, 00:08:22.864 "write": true, 00:08:22.864 "unmap": true, 00:08:22.864 "write_zeroes": true, 00:08:22.864 "flush": true, 00:08:22.864 "reset": true, 00:08:22.864 "compare": false, 00:08:22.864 "compare_and_write": false, 00:08:22.864 "abort": true, 00:08:22.864 "nvme_admin": false, 00:08:22.864 "nvme_io": false 00:08:22.864 }, 00:08:22.864 "memory_domains": [ 00:08:22.864 { 00:08:22.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.864 "dma_device_type": 2 00:08:22.864 } 00:08:22.864 ], 00:08:22.864 
"driver_specific": {} 00:08:22.864 } 00:08:22.864 ]' 00:08:22.864 23:07:28 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:22.864 23:07:28 -- common/autotest_common.sh@1362 -- # bs=512 00:08:22.864 23:07:28 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:22.864 23:07:28 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:22.864 23:07:28 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:22.864 23:07:28 -- common/autotest_common.sh@1367 -- # echo 512 00:08:22.864 23:07:28 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:22.864 23:07:28 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:24.239 23:07:29 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.239 23:07:29 -- common/autotest_common.sh@1177 -- # local i=0 00:08:24.239 23:07:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.239 23:07:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:24.239 23:07:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:26.145 23:07:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:26.145 23:07:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:26.145 23:07:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.145 23:07:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:26.145 23:07:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.145 23:07:31 -- common/autotest_common.sh@1187 -- # return 0 00:08:26.145 23:07:31 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:26.145 23:07:31 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:26.145 23:07:31 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:26.145 23:07:31 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:26.145 23:07:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:26.145 23:07:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:26.145 23:07:31 -- setup/common.sh@80 -- # echo 536870912 00:08:26.145 23:07:31 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:26.145 23:07:31 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:26.145 23:07:31 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:26.145 23:07:31 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:26.145 23:07:31 -- target/filesystem.sh@69 -- # partprobe 00:08:26.145 23:07:31 -- target/filesystem.sh@70 -- # sleep 1 00:08:27.083 23:07:32 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:27.083 23:07:32 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:27.083 23:07:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:27.083 23:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:27.083 23:07:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.083 ************************************ 00:08:27.083 START TEST filesystem_in_capsule_ext4 00:08:27.083 ************************************ 00:08:27.083 23:07:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:27.083 23:07:32 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:27.083 23:07:32 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:27.083 23:07:32 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:27.083 23:07:32 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:27.083 23:07:32 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:27.083 23:07:32 -- common/autotest_common.sh@904 -- # local i=0 00:08:27.083 23:07:32 -- common/autotest_common.sh@905 -- # local force 00:08:27.083 23:07:32 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:27.083 23:07:32 -- common/autotest_common.sh@908 -- # force=-F 00:08:27.083 23:07:32 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:27.083 mke2fs 1.47.0 (5-Feb-2023) 00:08:27.341 Discarding device blocks: 0/522240 done 00:08:27.341 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:27.341 Filesystem UUID: 049abdda-f39c-4789-9835-bd1cd16df686 00:08:27.341 Superblock backups stored on blocks: 00:08:27.341 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:27.341 00:08:27.341 Allocating group tables: 0/64 done 00:08:27.341 Writing inode tables: 0/64 done 00:08:27.341 Creating journal (8192 blocks): done 00:08:27.341 Writing superblocks and filesystem accounting information: 0/64 done 00:08:27.341 00:08:27.341 23:07:32 -- common/autotest_common.sh@921 -- # return 0 00:08:27.341 23:07:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.341 23:07:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.341 23:07:32 -- target/filesystem.sh@25 -- # sync 00:08:27.341 23:07:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.341 23:07:32 -- target/filesystem.sh@27 -- # sync 00:08:27.341 23:07:32 -- target/filesystem.sh@29 -- # i=0 00:08:27.341 23:07:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.341 23:07:32 -- target/filesystem.sh@37 -- # kill -0 481277 00:08:27.341 23:07:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.341 23:07:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.341 23:07:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.341 23:07:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.341 00:08:27.341 real 0m0.198s 00:08:27.341 user 0m0.022s 00:08:27.341 sys 0m0.085s 00:08:27.341 23:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.341 23:07:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.341 ************************************ 00:08:27.341 END TEST filesystem_in_capsule_ext4 00:08:27.341 ************************************ 00:08:27.342 23:07:33 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:27.342 23:07:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:27.342 23:07:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:27.342 23:07:33 -- common/autotest_common.sh@10 -- # set +x 00:08:27.342 ************************************ 00:08:27.342 START TEST filesystem_in_capsule_btrfs 00:08:27.342 ************************************ 00:08:27.342 23:07:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:27.342 23:07:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:27.342 23:07:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.342 23:07:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:27.342 23:07:33 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:27.342 23:07:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 
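Each filesystem_in_capsule_* test walks the same make_filesystem helper the xtrace is stepping through here: pick -F for ext4 and -f for everything else, run mkfs, then mount the namespace partition and do a tiny write/remove cycle. A simplified sketch of that flow (the helper's retry loop and error handling are omitted; this is not the verbatim script):

# condensed form of the make_filesystem helper traced above (retry loop omitted)
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    # ext4's mkfs spells "force" as -F; btrfs and xfs use -f
    [ "$fstype" = ext4 ] && force=-F || force=-f
    mkfs.$fstype $force "$dev_name"
}

# the per-filesystem test then exercises the freshly formatted partition
make_filesystem btrfs /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device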
00:08:27.342 23:07:33 -- common/autotest_common.sh@904 -- # local i=0 00:08:27.342 23:07:33 -- common/autotest_common.sh@905 -- # local force 00:08:27.342 23:07:33 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:27.342 23:07:33 -- common/autotest_common.sh@910 -- # force=-f 00:08:27.342 23:07:33 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:27.601 btrfs-progs v6.8.1 00:08:27.601 See https://btrfs.readthedocs.io for more information. 00:08:27.601 00:08:27.601 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:27.601 NOTE: several default settings have changed in version 5.15, please make sure 00:08:27.601 this does not affect your deployments: 00:08:27.601 - DUP for metadata (-m dup) 00:08:27.601 - enabled no-holes (-O no-holes) 00:08:27.601 - enabled free-space-tree (-R free-space-tree) 00:08:27.601 00:08:27.601 Label: (null) 00:08:27.601 UUID: 3da5a760-1e8d-4b68-a8b1-d4d3994eb3b2 00:08:27.601 Node size: 16384 00:08:27.601 Sector size: 4096 (CPU page size: 4096) 00:08:27.601 Filesystem size: 510.00MiB 00:08:27.601 Block group profiles: 00:08:27.601 Data: single 8.00MiB 00:08:27.601 Metadata: DUP 32.00MiB 00:08:27.601 System: DUP 8.00MiB 00:08:27.601 SSD detected: yes 00:08:27.601 Zoned device: no 00:08:27.601 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:27.601 Checksum: crc32c 00:08:27.601 Number of devices: 1 00:08:27.601 Devices: 00:08:27.601 ID SIZE PATH 00:08:27.601 1 510.00MiB /dev/nvme0n1p1 00:08:27.601 00:08:27.601 23:07:33 -- common/autotest_common.sh@921 -- # return 0 00:08:27.601 23:07:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.601 23:07:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.601 23:07:33 -- target/filesystem.sh@25 -- # sync 00:08:27.601 23:07:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.601 23:07:33 -- target/filesystem.sh@27 -- # sync 00:08:27.601 23:07:33 -- target/filesystem.sh@29 -- # i=0 00:08:27.601 23:07:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.601 23:07:33 -- target/filesystem.sh@37 -- # kill -0 481277 00:08:27.601 23:07:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.601 23:07:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.601 23:07:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.601 23:07:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.601 00:08:27.601 real 0m0.252s 00:08:27.601 user 0m0.037s 00:08:27.601 sys 0m0.118s 00:08:27.601 23:07:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.601 23:07:33 -- common/autotest_common.sh@10 -- # set +x 00:08:27.601 ************************************ 00:08:27.601 END TEST filesystem_in_capsule_btrfs 00:08:27.601 ************************************ 00:08:27.601 23:07:33 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:27.601 23:07:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:27.601 23:07:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:27.601 23:07:33 -- common/autotest_common.sh@10 -- # set +x 00:08:27.601 ************************************ 00:08:27.601 START TEST filesystem_in_capsule_xfs 00:08:27.601 ************************************ 00:08:27.601 23:07:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:27.601 23:07:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:27.601 23:07:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.601 
23:07:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:27.601 23:07:33 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:27.601 23:07:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:27.601 23:07:33 -- common/autotest_common.sh@904 -- # local i=0 00:08:27.601 23:07:33 -- common/autotest_common.sh@905 -- # local force 00:08:27.601 23:07:33 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:27.601 23:07:33 -- common/autotest_common.sh@910 -- # force=-f 00:08:27.601 23:07:33 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:27.861 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:27.861 = sectsz=512 attr=2, projid32bit=1 00:08:27.861 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:27.861 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:27.861 data = bsize=4096 blocks=130560, imaxpct=25 00:08:27.861 = sunit=0 swidth=0 blks 00:08:27.861 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:27.861 log =internal log bsize=4096 blocks=16384, version=2 00:08:27.861 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:27.861 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:27.861 Discarding blocks...Done. 00:08:27.861 23:07:33 -- common/autotest_common.sh@921 -- # return 0 00:08:27.861 23:07:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.861 23:07:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.861 23:07:33 -- target/filesystem.sh@25 -- # sync 00:08:27.861 23:07:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.861 23:07:33 -- target/filesystem.sh@27 -- # sync 00:08:27.861 23:07:33 -- target/filesystem.sh@29 -- # i=0 00:08:27.861 23:07:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.861 23:07:33 -- target/filesystem.sh@37 -- # kill -0 481277 00:08:27.861 23:07:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.861 23:07:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.861 23:07:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.861 23:07:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.861 00:08:27.861 real 0m0.213s 00:08:27.861 user 0m0.029s 00:08:27.861 sys 0m0.082s 00:08:27.861 23:07:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.861 23:07:33 -- common/autotest_common.sh@10 -- # set +x 00:08:27.861 ************************************ 00:08:27.861 END TEST filesystem_in_capsule_xfs 00:08:27.861 ************************************ 00:08:27.861 23:07:33 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:27.861 23:07:33 -- target/filesystem.sh@93 -- # sync 00:08:27.861 23:07:33 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:29.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.239 23:07:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.239 23:07:34 -- common/autotest_common.sh@1198 -- # local i=0 00:08:29.239 23:07:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:29.239 23:07:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.239 23:07:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:29.239 23:07:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.239 23:07:34 -- common/autotest_common.sh@1210 -- # return 0 00:08:29.239 23:07:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:08:29.239 23:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.239 23:07:34 -- common/autotest_common.sh@10 -- # set +x 00:08:29.239 23:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.239 23:07:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:29.239 23:07:34 -- target/filesystem.sh@101 -- # killprocess 481277 00:08:29.239 23:07:34 -- common/autotest_common.sh@926 -- # '[' -z 481277 ']' 00:08:29.239 23:07:34 -- common/autotest_common.sh@930 -- # kill -0 481277 00:08:29.239 23:07:34 -- common/autotest_common.sh@931 -- # uname 00:08:29.239 23:07:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:29.239 23:07:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 481277 00:08:29.239 23:07:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:29.239 23:07:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:29.239 23:07:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 481277' 00:08:29.239 killing process with pid 481277 00:08:29.239 23:07:34 -- common/autotest_common.sh@945 -- # kill 481277 00:08:29.239 23:07:34 -- common/autotest_common.sh@950 -- # wait 481277 00:08:29.499 23:07:35 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:29.499 00:08:29.499 real 0m7.918s 00:08:29.499 user 0m30.767s 00:08:29.499 sys 0m1.211s 00:08:29.499 23:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.499 23:07:35 -- common/autotest_common.sh@10 -- # set +x 00:08:29.499 ************************************ 00:08:29.499 END TEST nvmf_filesystem_in_capsule 00:08:29.499 ************************************ 00:08:29.499 23:07:35 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:29.499 23:07:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:29.499 23:07:35 -- nvmf/common.sh@116 -- # sync 00:08:29.499 23:07:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:29.499 23:07:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:29.499 23:07:35 -- nvmf/common.sh@119 -- # set +e 00:08:29.499 23:07:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:29.499 23:07:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:29.499 rmmod nvme_rdma 00:08:29.499 rmmod nvme_fabrics 00:08:29.499 23:07:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:29.499 23:07:35 -- nvmf/common.sh@123 -- # set -e 00:08:29.499 23:07:35 -- nvmf/common.sh@124 -- # return 0 00:08:29.499 23:07:35 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:29.499 23:07:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:29.499 23:07:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:29.499 00:08:29.499 real 0m22.551s 00:08:29.499 user 1m3.301s 00:08:29.499 sys 0m7.443s 00:08:29.499 23:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.499 23:07:35 -- common/autotest_common.sh@10 -- # set +x 00:08:29.499 ************************************ 00:08:29.499 END TEST nvmf_filesystem 00:08:29.499 ************************************ 00:08:29.758 23:07:35 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:29.758 23:07:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:29.758 23:07:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.758 23:07:35 -- common/autotest_common.sh@10 -- # set +x 00:08:29.758 ************************************ 00:08:29.758 START TEST nvmf_discovery 00:08:29.758 
************************************ 00:08:29.758 23:07:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:29.758 * Looking for test storage... 00:08:29.758 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:29.758 23:07:35 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.758 23:07:35 -- nvmf/common.sh@7 -- # uname -s 00:08:29.758 23:07:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.758 23:07:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.758 23:07:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.758 23:07:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.758 23:07:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.758 23:07:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.758 23:07:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.758 23:07:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.758 23:07:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.758 23:07:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.758 23:07:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:29.758 23:07:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:29.758 23:07:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.758 23:07:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.758 23:07:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.758 23:07:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:29.758 23:07:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.758 23:07:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.758 23:07:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.759 23:07:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.759 23:07:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.759 23:07:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.759 23:07:35 -- paths/export.sh@5 -- # export PATH 00:08:29.759 23:07:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.759 23:07:35 -- nvmf/common.sh@46 -- # : 0 00:08:29.759 23:07:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:29.759 23:07:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:29.759 23:07:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:29.759 23:07:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.759 23:07:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.759 23:07:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:29.759 23:07:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:29.759 23:07:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:29.759 23:07:35 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:29.759 23:07:35 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:29.759 23:07:35 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:29.759 23:07:35 -- target/discovery.sh@15 -- # hash nvme 00:08:29.759 23:07:35 -- target/discovery.sh@20 -- # nvmftestinit 00:08:29.759 23:07:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:29.759 23:07:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.759 23:07:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:29.759 23:07:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:29.759 23:07:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:29.759 23:07:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.759 23:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.759 23:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.759 23:07:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:29.759 23:07:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:29.759 23:07:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:29.759 23:07:35 -- common/autotest_common.sh@10 -- # set +x 00:08:36.332 23:07:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:36.332 23:07:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:36.332 23:07:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:36.332 23:07:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:36.332 23:07:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:36.332 23:07:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:36.332 23:07:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:36.332 23:07:42 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:36.332 23:07:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:36.332 23:07:42 -- nvmf/common.sh@295 -- # e810=() 00:08:36.332 23:07:42 -- nvmf/common.sh@295 -- # local -ga e810 00:08:36.332 23:07:42 -- nvmf/common.sh@296 -- # x722=() 00:08:36.332 23:07:42 -- nvmf/common.sh@296 -- # local -ga x722 00:08:36.332 23:07:42 -- nvmf/common.sh@297 -- # mlx=() 00:08:36.332 23:07:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:36.332 23:07:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.332 23:07:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.332 23:07:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.332 23:07:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.332 23:07:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.332 23:07:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.332 23:07:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.332 23:07:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.332 23:07:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.333 23:07:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.333 23:07:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.333 23:07:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:36.333 23:07:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:36.333 23:07:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:36.333 23:07:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:36.333 23:07:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:36.333 23:07:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:36.333 23:07:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:36.333 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:36.333 23:07:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:36.333 23:07:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:36.333 23:07:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:36.333 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:36.333 23:07:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:36.333 23:07:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:36.333 23:07:42 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:36.333 
23:07:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.333 23:07:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:36.333 23:07:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.333 23:07:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:36.333 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:36.333 23:07:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.333 23:07:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:36.333 23:07:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.333 23:07:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:36.333 23:07:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.333 23:07:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:36.333 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:36.333 23:07:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.333 23:07:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:36.333 23:07:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:36.333 23:07:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:36.333 23:07:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:36.333 23:07:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:36.333 23:07:42 -- nvmf/common.sh@57 -- # uname 00:08:36.333 23:07:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:36.333 23:07:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:36.333 23:07:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:36.592 23:07:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:36.592 23:07:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:36.592 23:07:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:36.592 23:07:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:36.592 23:07:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:36.592 23:07:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:36.592 23:07:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:36.592 23:07:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:36.592 23:07:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:36.592 23:07:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:36.592 23:07:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:36.592 23:07:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:36.592 23:07:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:36.592 23:07:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:36.592 23:07:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.592 23:07:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:36.592 23:07:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:36.592 23:07:42 -- nvmf/common.sh@104 -- # continue 2 00:08:36.592 23:07:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:36.592 23:07:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.592 23:07:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:36.592 23:07:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.592 23:07:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:36.592 23:07:42 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:36.592 23:07:42 -- nvmf/common.sh@104 -- # continue 2 00:08:36.592 23:07:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:36.592 23:07:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:36.592 23:07:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:36.592 23:07:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:36.592 23:07:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:36.592 23:07:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:36.592 23:07:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:36.592 23:07:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:36.592 23:07:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:36.592 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:36.592 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:36.592 altname enp217s0f0np0 00:08:36.592 altname ens818f0np0 00:08:36.592 inet 192.168.100.8/24 scope global mlx_0_0 00:08:36.592 valid_lft forever preferred_lft forever 00:08:36.592 23:07:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:36.592 23:07:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:36.592 23:07:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:36.592 23:07:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:36.592 23:07:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:36.592 23:07:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:36.592 23:07:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:36.592 23:07:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:36.592 23:07:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:36.592 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:36.592 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:36.592 altname enp217s0f1np1 00:08:36.592 altname ens818f1np1 00:08:36.592 inet 192.168.100.9/24 scope global mlx_0_1 00:08:36.592 valid_lft forever preferred_lft forever 00:08:36.592 23:07:42 -- nvmf/common.sh@410 -- # return 0 00:08:36.592 23:07:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:36.592 23:07:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:36.592 23:07:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:36.592 23:07:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:36.592 23:07:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:36.592 23:07:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:36.592 23:07:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:36.592 23:07:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:36.592 23:07:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:36.593 23:07:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:36.593 23:07:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:36.593 23:07:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.593 23:07:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:36.593 23:07:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:36.593 23:07:42 -- nvmf/common.sh@104 -- # continue 2 00:08:36.593 23:07:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:36.593 23:07:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.593 23:07:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:36.593 23:07:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.593 23:07:42 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:36.593 23:07:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:36.593 23:07:42 -- nvmf/common.sh@104 -- # continue 2 00:08:36.593 23:07:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:36.593 23:07:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:36.593 23:07:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:36.593 23:07:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:36.593 23:07:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:36.593 23:07:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:36.593 23:07:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:36.593 23:07:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:36.593 23:07:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:36.593 23:07:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:36.593 23:07:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:36.593 23:07:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:36.593 23:07:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:36.593 192.168.100.9' 00:08:36.593 23:07:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:36.593 192.168.100.9' 00:08:36.593 23:07:42 -- nvmf/common.sh@445 -- # head -n 1 00:08:36.593 23:07:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:36.593 23:07:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:36.593 192.168.100.9' 00:08:36.593 23:07:42 -- nvmf/common.sh@446 -- # tail -n +2 00:08:36.593 23:07:42 -- nvmf/common.sh@446 -- # head -n 1 00:08:36.593 23:07:42 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:36.593 23:07:42 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:36.593 23:07:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:36.593 23:07:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:36.593 23:07:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:36.593 23:07:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:36.593 23:07:42 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:36.593 23:07:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:36.593 23:07:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:36.593 23:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:36.593 23:07:42 -- nvmf/common.sh@469 -- # nvmfpid=486261 00:08:36.593 23:07:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.593 23:07:42 -- nvmf/common.sh@470 -- # waitforlisten 486261 00:08:36.593 23:07:42 -- common/autotest_common.sh@819 -- # '[' -z 486261 ']' 00:08:36.593 23:07:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.593 23:07:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:36.593 23:07:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.593 23:07:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:36.593 23:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:36.853 [2024-11-02 23:07:42.372122] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
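Before the discovery target comes up, the common.sh helpers above have already derived the two RDMA-capable addresses (192.168.100.8 and 192.168.100.9) straight from the mlx interfaces. The address harvesting they trace reduces to roughly the following, assuming the interface loop and soft-RoCE (rxe) handling are left out:

# pull a NIC's IPv4 address out of `ip` output, as the get_ip_address trace above shows
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9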
00:08:36.853 [2024-11-02 23:07:42.372172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.853 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.853 [2024-11-02 23:07:42.444475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.853 [2024-11-02 23:07:42.516762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.853 [2024-11-02 23:07:42.516868] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.853 [2024-11-02 23:07:42.516877] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.853 [2024-11-02 23:07:42.516886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.853 [2024-11-02 23:07:42.516942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.853 [2024-11-02 23:07:42.517045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.853 [2024-11-02 23:07:42.517066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.853 [2024-11-02 23:07:42.517068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.791 23:07:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:37.791 23:07:43 -- common/autotest_common.sh@852 -- # return 0 00:08:37.791 23:07:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:37.791 23:07:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.791 23:07:43 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 [2024-11-02 23:07:43.255713] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeeb090/0xeef580) succeed. 00:08:37.791 [2024-11-02 23:07:43.264813] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeec680/0xf30c20) succeed. 
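The discovery test that follows builds four null-bdev subsystems, exposes the discovery service, adds a referral on port 4430, and then reads the discovery log page from the host side; the six log records printed further down are exactly those objects. A condensed sketch of that setup, assuming the scripts/rpc.py entry point and leaving out the --hostnqn/--hostid arguments the test passes to nvme discover:

# four subsystems, each backed by a null bdev sized per NULL_BDEV_SIZE/NULL_BLOCK_SIZE (102400, 512)
for i in $(seq 1 4); do
    ./scripts/rpc.py bdev_null_create Null$i 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done

# expose the discovery subsystem itself and advertise a referral on port 4430
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

# host side: fetch the discovery log page (expect 6 records: discovery entry, 4 subsystems, 1 referral)
nvme discover -t rdma -a 192.168.100.8 -s 4420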
00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@26 -- # seq 1 4 00:08:37.791 23:07:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.791 23:07:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 Null1 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 [2024-11-02 23:07:43.431705] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.791 23:07:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 Null2 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.791 23:07:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 Null3 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.791 23:07:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 Null4 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:37.791 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.791 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.791 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.791 23:07:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:37.792 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.792 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.792 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.792 23:07:43 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:37.792 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.792 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.792 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.792 23:07:43 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:37.792 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.792 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.051 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.051 23:07:43 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:08:38.051 00:08:38.051 Discovery Log Number of Records 6, Generation counter 6 00:08:38.051 =====Discovery Log Entry 0====== 00:08:38.051 trtype: 
rdma 00:08:38.051 adrfam: ipv4 00:08:38.051 subtype: current discovery subsystem 00:08:38.051 treq: not required 00:08:38.051 portid: 0 00:08:38.051 trsvcid: 4420 00:08:38.051 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.051 traddr: 192.168.100.8 00:08:38.051 eflags: explicit discovery connections, duplicate discovery information 00:08:38.051 rdma_prtype: not specified 00:08:38.051 rdma_qptype: connected 00:08:38.051 rdma_cms: rdma-cm 00:08:38.051 rdma_pkey: 0x0000 00:08:38.051 =====Discovery Log Entry 1====== 00:08:38.051 trtype: rdma 00:08:38.051 adrfam: ipv4 00:08:38.051 subtype: nvme subsystem 00:08:38.051 treq: not required 00:08:38.051 portid: 0 00:08:38.051 trsvcid: 4420 00:08:38.051 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:38.051 traddr: 192.168.100.8 00:08:38.051 eflags: none 00:08:38.051 rdma_prtype: not specified 00:08:38.051 rdma_qptype: connected 00:08:38.051 rdma_cms: rdma-cm 00:08:38.051 rdma_pkey: 0x0000 00:08:38.051 =====Discovery Log Entry 2====== 00:08:38.051 trtype: rdma 00:08:38.051 adrfam: ipv4 00:08:38.051 subtype: nvme subsystem 00:08:38.051 treq: not required 00:08:38.051 portid: 0 00:08:38.051 trsvcid: 4420 00:08:38.051 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:38.051 traddr: 192.168.100.8 00:08:38.051 eflags: none 00:08:38.051 rdma_prtype: not specified 00:08:38.051 rdma_qptype: connected 00:08:38.051 rdma_cms: rdma-cm 00:08:38.051 rdma_pkey: 0x0000 00:08:38.051 =====Discovery Log Entry 3====== 00:08:38.051 trtype: rdma 00:08:38.051 adrfam: ipv4 00:08:38.051 subtype: nvme subsystem 00:08:38.051 treq: not required 00:08:38.051 portid: 0 00:08:38.051 trsvcid: 4420 00:08:38.051 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:38.052 traddr: 192.168.100.8 00:08:38.052 eflags: none 00:08:38.052 rdma_prtype: not specified 00:08:38.052 rdma_qptype: connected 00:08:38.052 rdma_cms: rdma-cm 00:08:38.052 rdma_pkey: 0x0000 00:08:38.052 =====Discovery Log Entry 4====== 00:08:38.052 trtype: rdma 00:08:38.052 adrfam: ipv4 00:08:38.052 subtype: nvme subsystem 00:08:38.052 treq: not required 00:08:38.052 portid: 0 00:08:38.052 trsvcid: 4420 00:08:38.052 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:38.052 traddr: 192.168.100.8 00:08:38.052 eflags: none 00:08:38.052 rdma_prtype: not specified 00:08:38.052 rdma_qptype: connected 00:08:38.052 rdma_cms: rdma-cm 00:08:38.052 rdma_pkey: 0x0000 00:08:38.052 =====Discovery Log Entry 5====== 00:08:38.052 trtype: rdma 00:08:38.052 adrfam: ipv4 00:08:38.052 subtype: discovery subsystem referral 00:08:38.052 treq: not required 00:08:38.052 portid: 0 00:08:38.052 trsvcid: 4430 00:08:38.052 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.052 traddr: 192.168.100.8 00:08:38.052 eflags: none 00:08:38.052 rdma_prtype: unrecognized 00:08:38.052 rdma_qptype: unrecognized 00:08:38.052 rdma_cms: unrecognized 00:08:38.052 rdma_pkey: 0x0000 00:08:38.052 23:07:43 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:38.052 Perform nvmf subsystem discovery via RPC 00:08:38.052 23:07:43 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 [2024-11-02 23:07:43.668210] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:38.052 [ 00:08:38.052 { 00:08:38.052 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:38.052 "subtype": "Discovery", 
00:08:38.052 "listen_addresses": [ 00:08:38.052 { 00:08:38.052 "transport": "RDMA", 00:08:38.052 "trtype": "RDMA", 00:08:38.052 "adrfam": "IPv4", 00:08:38.052 "traddr": "192.168.100.8", 00:08:38.052 "trsvcid": "4420" 00:08:38.052 } 00:08:38.052 ], 00:08:38.052 "allow_any_host": true, 00:08:38.052 "hosts": [] 00:08:38.052 }, 00:08:38.052 { 00:08:38.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.052 "subtype": "NVMe", 00:08:38.052 "listen_addresses": [ 00:08:38.052 { 00:08:38.052 "transport": "RDMA", 00:08:38.052 "trtype": "RDMA", 00:08:38.052 "adrfam": "IPv4", 00:08:38.052 "traddr": "192.168.100.8", 00:08:38.052 "trsvcid": "4420" 00:08:38.052 } 00:08:38.052 ], 00:08:38.052 "allow_any_host": true, 00:08:38.052 "hosts": [], 00:08:38.052 "serial_number": "SPDK00000000000001", 00:08:38.052 "model_number": "SPDK bdev Controller", 00:08:38.052 "max_namespaces": 32, 00:08:38.052 "min_cntlid": 1, 00:08:38.052 "max_cntlid": 65519, 00:08:38.052 "namespaces": [ 00:08:38.052 { 00:08:38.052 "nsid": 1, 00:08:38.052 "bdev_name": "Null1", 00:08:38.052 "name": "Null1", 00:08:38.052 "nguid": "CE1E7FDA6C664A069DFED7D2C1F1EEE0", 00:08:38.052 "uuid": "ce1e7fda-6c66-4a06-9dfe-d7d2c1f1eee0" 00:08:38.052 } 00:08:38.052 ] 00:08:38.052 }, 00:08:38.052 { 00:08:38.052 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:38.052 "subtype": "NVMe", 00:08:38.052 "listen_addresses": [ 00:08:38.052 { 00:08:38.052 "transport": "RDMA", 00:08:38.052 "trtype": "RDMA", 00:08:38.052 "adrfam": "IPv4", 00:08:38.052 "traddr": "192.168.100.8", 00:08:38.052 "trsvcid": "4420" 00:08:38.052 } 00:08:38.052 ], 00:08:38.052 "allow_any_host": true, 00:08:38.052 "hosts": [], 00:08:38.052 "serial_number": "SPDK00000000000002", 00:08:38.052 "model_number": "SPDK bdev Controller", 00:08:38.052 "max_namespaces": 32, 00:08:38.052 "min_cntlid": 1, 00:08:38.052 "max_cntlid": 65519, 00:08:38.052 "namespaces": [ 00:08:38.052 { 00:08:38.052 "nsid": 1, 00:08:38.052 "bdev_name": "Null2", 00:08:38.052 "name": "Null2", 00:08:38.052 "nguid": "9E675F26D1524651973E942F74F3D3C5", 00:08:38.052 "uuid": "9e675f26-d152-4651-973e-942f74f3d3c5" 00:08:38.052 } 00:08:38.052 ] 00:08:38.052 }, 00:08:38.052 { 00:08:38.052 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:38.052 "subtype": "NVMe", 00:08:38.052 "listen_addresses": [ 00:08:38.052 { 00:08:38.052 "transport": "RDMA", 00:08:38.052 "trtype": "RDMA", 00:08:38.052 "adrfam": "IPv4", 00:08:38.052 "traddr": "192.168.100.8", 00:08:38.052 "trsvcid": "4420" 00:08:38.052 } 00:08:38.052 ], 00:08:38.052 "allow_any_host": true, 00:08:38.052 "hosts": [], 00:08:38.052 "serial_number": "SPDK00000000000003", 00:08:38.052 "model_number": "SPDK bdev Controller", 00:08:38.052 "max_namespaces": 32, 00:08:38.052 "min_cntlid": 1, 00:08:38.052 "max_cntlid": 65519, 00:08:38.052 "namespaces": [ 00:08:38.052 { 00:08:38.052 "nsid": 1, 00:08:38.052 "bdev_name": "Null3", 00:08:38.052 "name": "Null3", 00:08:38.052 "nguid": "479F0A80CA2644FEB50B8118F4328C82", 00:08:38.052 "uuid": "479f0a80-ca26-44fe-b50b-8118f4328c82" 00:08:38.052 } 00:08:38.052 ] 00:08:38.052 }, 00:08:38.052 { 00:08:38.052 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:38.052 "subtype": "NVMe", 00:08:38.052 "listen_addresses": [ 00:08:38.052 { 00:08:38.052 "transport": "RDMA", 00:08:38.052 "trtype": "RDMA", 00:08:38.052 "adrfam": "IPv4", 00:08:38.052 "traddr": "192.168.100.8", 00:08:38.052 "trsvcid": "4420" 00:08:38.052 } 00:08:38.052 ], 00:08:38.052 "allow_any_host": true, 00:08:38.052 "hosts": [], 00:08:38.052 "serial_number": "SPDK00000000000004", 00:08:38.052 "model_number": "SPDK bdev 
Controller", 00:08:38.052 "max_namespaces": 32, 00:08:38.052 "min_cntlid": 1, 00:08:38.052 "max_cntlid": 65519, 00:08:38.052 "namespaces": [ 00:08:38.052 { 00:08:38.052 "nsid": 1, 00:08:38.052 "bdev_name": "Null4", 00:08:38.052 "name": "Null4", 00:08:38.052 "nguid": "7A66FD18395F43FB8343842A7DA71816", 00:08:38.052 "uuid": "7a66fd18-395f-43fb-8343-842a7da71816" 00:08:38.052 } 00:08:38.052 ] 00:08:38.052 } 00:08:38.052 ] 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@42 -- # seq 1 4 00:08:38.052 23:07:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.052 23:07:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.052 23:07:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.052 23:07:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.052 23:07:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 
23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.052 23:07:43 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:38.052 23:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.052 23:07:43 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:38.052 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 23:07:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.312 23:07:43 -- target/discovery.sh@49 -- # check_bdevs= 00:08:38.312 23:07:43 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:38.312 23:07:43 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:38.312 23:07:43 -- target/discovery.sh@57 -- # nvmftestfini 00:08:38.312 23:07:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:38.312 23:07:43 -- nvmf/common.sh@116 -- # sync 00:08:38.312 23:07:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:38.312 23:07:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:38.312 23:07:43 -- nvmf/common.sh@119 -- # set +e 00:08:38.312 23:07:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:38.312 23:07:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:38.312 rmmod nvme_rdma 00:08:38.312 rmmod nvme_fabrics 00:08:38.312 23:07:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:38.312 23:07:43 -- nvmf/common.sh@123 -- # set -e 00:08:38.312 23:07:43 -- nvmf/common.sh@124 -- # return 0 00:08:38.312 23:07:43 -- nvmf/common.sh@477 -- # '[' -n 486261 ']' 00:08:38.312 23:07:43 -- nvmf/common.sh@478 -- # killprocess 486261 00:08:38.312 23:07:43 -- common/autotest_common.sh@926 -- # '[' -z 486261 ']' 00:08:38.312 23:07:43 -- common/autotest_common.sh@930 -- # kill -0 486261 00:08:38.312 23:07:43 -- common/autotest_common.sh@931 -- # uname 00:08:38.312 23:07:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:38.312 23:07:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 486261 00:08:38.312 23:07:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:38.312 23:07:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:38.312 23:07:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 486261' 00:08:38.312 killing process with pid 486261 00:08:38.312 23:07:43 -- common/autotest_common.sh@945 -- # kill 486261 00:08:38.312 [2024-11-02 23:07:43.963548] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:38.312 23:07:43 -- common/autotest_common.sh@950 -- # wait 486261 00:08:38.584 23:07:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:38.584 23:07:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:38.584 00:08:38.584 real 0m8.943s 00:08:38.584 user 0m8.742s 00:08:38.584 sys 0m5.794s 00:08:38.584 23:07:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.584 23:07:44 -- common/autotest_common.sh@10 -- # set +x 00:08:38.584 ************************************ 00:08:38.584 END TEST nvmf_discovery 00:08:38.584 ************************************ 00:08:38.584 23:07:44 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:38.584 23:07:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:38.584 23:07:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.584 23:07:44 -- common/autotest_common.sh@10 -- 
# set +x 00:08:38.585 ************************************ 00:08:38.585 START TEST nvmf_referrals 00:08:38.585 ************************************ 00:08:38.585 23:07:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:38.852 * Looking for test storage... 00:08:38.852 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:38.852 23:07:44 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.852 23:07:44 -- nvmf/common.sh@7 -- # uname -s 00:08:38.852 23:07:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.852 23:07:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.852 23:07:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.852 23:07:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.852 23:07:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.853 23:07:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.853 23:07:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.853 23:07:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.853 23:07:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.853 23:07:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.853 23:07:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:38.853 23:07:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:38.853 23:07:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.853 23:07:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.853 23:07:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.853 23:07:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:38.853 23:07:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.853 23:07:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.853 23:07:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.853 23:07:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.853 23:07:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.853 23:07:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.853 23:07:44 -- paths/export.sh@5 -- # export PATH 00:08:38.853 23:07:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.853 23:07:44 -- nvmf/common.sh@46 -- # : 0 00:08:38.853 23:07:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:38.853 23:07:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:38.853 23:07:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:38.853 23:07:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.853 23:07:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.853 23:07:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:38.853 23:07:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:38.853 23:07:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:38.853 23:07:44 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:38.853 23:07:44 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:38.853 23:07:44 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:38.853 23:07:44 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:38.853 23:07:44 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:38.853 23:07:44 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:38.853 23:07:44 -- target/referrals.sh@37 -- # nvmftestinit 00:08:38.853 23:07:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:38.853 23:07:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.853 23:07:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:38.853 23:07:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:38.853 23:07:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:38.853 23:07:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.853 23:07:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.853 23:07:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.853 23:07:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:38.853 23:07:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:38.853 23:07:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:38.853 23:07:44 -- common/autotest_common.sh@10 -- # set +x 00:08:45.433 23:07:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:45.433 23:07:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:45.433 23:07:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:45.433 23:07:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 
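Annotation: the nvmf/common.sh header traced above derives the host identity (nvme gen-hostnqn) that every later discover call in this test reuses. A minimal sketch of that setup, assuming the hostid is simply the uuid portion of the generated NQN (the derivation itself is not shown in the trace):

# Sketch only: reproduce the host identity used by the discover calls later in this test.
NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # assumption: hostid is the uuid suffix of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# Later checks pass these flags to nvme discover, e.g.:
#   nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009 -o json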
00:08:45.433 23:07:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:45.433 23:07:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:45.433 23:07:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:45.433 23:07:50 -- nvmf/common.sh@294 -- # net_devs=() 00:08:45.433 23:07:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:45.433 23:07:50 -- nvmf/common.sh@295 -- # e810=() 00:08:45.433 23:07:50 -- nvmf/common.sh@295 -- # local -ga e810 00:08:45.433 23:07:50 -- nvmf/common.sh@296 -- # x722=() 00:08:45.433 23:07:50 -- nvmf/common.sh@296 -- # local -ga x722 00:08:45.433 23:07:50 -- nvmf/common.sh@297 -- # mlx=() 00:08:45.433 23:07:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:45.433 23:07:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.433 23:07:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:45.433 23:07:50 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:45.433 23:07:50 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:45.433 23:07:50 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:45.433 23:07:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:45.433 23:07:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:45.433 23:07:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:45.433 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:45.433 23:07:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:45.433 23:07:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:45.433 23:07:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:45.433 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:45.433 23:07:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect 
-i 15' 00:08:45.433 23:07:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:45.433 23:07:50 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:45.433 23:07:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.433 23:07:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:45.433 23:07:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.433 23:07:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:45.433 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:45.433 23:07:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.433 23:07:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:45.433 23:07:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.433 23:07:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:45.433 23:07:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.433 23:07:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:45.433 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:45.433 23:07:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.433 23:07:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:45.433 23:07:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:45.433 23:07:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:45.433 23:07:50 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:45.434 23:07:50 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:45.434 23:07:50 -- nvmf/common.sh@57 -- # uname 00:08:45.434 23:07:50 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:45.434 23:07:50 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:45.434 23:07:50 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:45.434 23:07:50 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:45.434 23:07:50 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:45.434 23:07:50 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:45.434 23:07:50 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:45.434 23:07:50 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:45.434 23:07:50 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:45.434 23:07:50 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:45.434 23:07:50 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:45.434 23:07:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:45.434 23:07:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:45.434 23:07:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:45.434 23:07:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:45.434 23:07:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:45.434 23:07:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:45.434 23:07:50 -- nvmf/common.sh@104 -- # continue 2 00:08:45.434 23:07:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:45.434 23:07:50 -- nvmf/common.sh@104 -- # continue 2 00:08:45.434 23:07:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:45.434 23:07:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:45.434 23:07:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:45.434 23:07:50 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:45.434 23:07:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:45.434 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:45.434 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:45.434 altname enp217s0f0np0 00:08:45.434 altname ens818f0np0 00:08:45.434 inet 192.168.100.8/24 scope global mlx_0_0 00:08:45.434 valid_lft forever preferred_lft forever 00:08:45.434 23:07:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:45.434 23:07:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:45.434 23:07:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:45.434 23:07:50 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:45.434 23:07:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:45.434 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:45.434 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:45.434 altname enp217s0f1np1 00:08:45.434 altname ens818f1np1 00:08:45.434 inet 192.168.100.9/24 scope global mlx_0_1 00:08:45.434 valid_lft forever preferred_lft forever 00:08:45.434 23:07:50 -- nvmf/common.sh@410 -- # return 0 00:08:45.434 23:07:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:45.434 23:07:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:45.434 23:07:50 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:45.434 23:07:50 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:45.434 23:07:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:45.434 23:07:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:45.434 23:07:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:45.434 23:07:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:45.434 23:07:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:45.434 23:07:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:45.434 23:07:50 -- nvmf/common.sh@104 -- # continue 2 00:08:45.434 23:07:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.434 23:07:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:45.434 23:07:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:45.434 23:07:50 -- nvmf/common.sh@104 -- # continue 2 00:08:45.434 23:07:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:45.434 23:07:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:45.434 23:07:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:45.434 23:07:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:45.434 23:07:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:45.434 23:07:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:45.434 23:07:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:45.434 23:07:50 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:45.434 192.168.100.9' 00:08:45.434 23:07:50 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:45.434 192.168.100.9' 00:08:45.434 23:07:50 -- nvmf/common.sh@445 -- # head -n 1 00:08:45.434 23:07:50 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:45.434 23:07:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:45.434 192.168.100.9' 00:08:45.434 23:07:50 -- nvmf/common.sh@446 -- # tail -n +2 00:08:45.434 23:07:50 -- nvmf/common.sh@446 -- # head -n 1 00:08:45.434 23:07:50 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:45.434 23:07:50 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:45.434 23:07:50 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:45.434 23:07:50 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:45.434 23:07:50 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:45.434 23:07:50 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:45.434 23:07:50 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:45.434 23:07:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:45.434 23:07:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:45.434 23:07:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.434 23:07:50 -- nvmf/common.sh@469 -- # nvmfpid=489746 00:08:45.434 23:07:50 -- nvmf/common.sh@470 -- # waitforlisten 489746 00:08:45.434 23:07:50 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.434 23:07:50 -- common/autotest_common.sh@819 -- # '[' -z 489746 ']' 00:08:45.434 23:07:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.434 23:07:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.434 23:07:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:45.434 23:07:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.434 23:07:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.434 [2024-11-02 23:07:50.926920] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:45.434 [2024-11-02 23:07:50.926980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.434 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.434 [2024-11-02 23:07:50.998560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.434 [2024-11-02 23:07:51.072089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.434 [2024-11-02 23:07:51.072197] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.434 [2024-11-02 23:07:51.072208] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.435 [2024-11-02 23:07:51.072217] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.435 [2024-11-02 23:07:51.072262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.435 [2024-11-02 23:07:51.072358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.435 [2024-11-02 23:07:51.072441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.435 [2024-11-02 23:07:51.072443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.007 23:07:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.007 23:07:51 -- common/autotest_common.sh@852 -- # return 0 00:08:46.007 23:07:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.007 23:07:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:46.007 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.267 23:07:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.267 23:07:51 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:46.267 23:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.267 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.267 [2024-11-02 23:07:51.820945] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcea090/0xcee580) succeed. 00:08:46.267 [2024-11-02 23:07:51.830149] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xceb680/0xd2fc20) succeed. 
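Annotation: by this point nvmfappstart has launched the target and waited for its RPC socket, and the two create_ib_device notices above are the output of nvmf_create_transport. A minimal sketch of that bring-up as direct rpc.py calls, assuming rpc_cmd in these scripts forwards to scripts/rpc.py on the default /var/tmp/spdk.sock:

# Sketch: target bring-up equivalent to the traced nvmfappstart + nvmf_create_transport steps.
spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Wait until the RPC socket answers (rough stand-in for waitforlisten).
until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# Create the RDMA transport; this step is what prints the create_ib_device notices for mlx5_0/mlx5_1.
"$spdk/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192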
00:08:46.267 23:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.267 23:07:51 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:46.267 23:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.267 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.267 [2024-11-02 23:07:51.952763] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:46.267 23:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.267 23:07:51 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:46.267 23:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.267 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.267 23:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.267 23:07:51 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:46.267 23:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.267 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.267 23:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.267 23:07:51 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:46.267 23:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.267 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.267 23:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.267 23:07:51 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.267 23:07:51 -- target/referrals.sh@48 -- # jq length 00:08:46.267 23:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.267 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.267 23:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.528 23:07:52 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:46.528 23:07:52 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:46.528 23:07:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:46.528 23:07:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.528 23:07:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:46.528 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.528 23:07:52 -- target/referrals.sh@21 -- # sort 00:08:46.528 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.528 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.528 23:07:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:46.528 23:07:52 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:46.528 23:07:52 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:46.528 23:07:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.528 23:07:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.528 23:07:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:46.528 23:07:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.528 23:07:52 -- target/referrals.sh@26 -- # sort 00:08:46.528 23:07:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
00:08:46.528 23:07:52 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:46.528 23:07:52 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:46.528 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.528 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.528 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.528 23:07:52 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:46.528 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.528 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.528 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.528 23:07:52 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:46.528 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.528 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.528 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.528 23:07:52 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.528 23:07:52 -- target/referrals.sh@56 -- # jq length 00:08:46.528 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.528 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.528 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.528 23:07:52 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:46.528 23:07:52 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:46.528 23:07:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.528 23:07:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.528 23:07:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:46.528 23:07:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.528 23:07:52 -- target/referrals.sh@26 -- # sort 00:08:46.788 23:07:52 -- target/referrals.sh@26 -- # echo 00:08:46.788 23:07:52 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:46.788 23:07:52 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:46.788 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.788 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.788 23:07:52 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:46.788 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.788 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.788 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.788 23:07:52 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:46.788 23:07:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:46.788 23:07:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.788 23:07:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:46.788 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.788 23:07:52 -- target/referrals.sh@21 -- # sort 00:08:46.788 23:07:52 -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.788 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.788 23:07:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:46.788 23:07:52 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:46.788 23:07:52 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:46.788 23:07:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.788 23:07:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.788 23:07:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:46.788 23:07:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.788 23:07:52 -- target/referrals.sh@26 -- # sort 00:08:47.048 23:07:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:47.048 23:07:52 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.048 23:07:52 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:47.048 23:07:52 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:47.048 23:07:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.048 23:07:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.048 23:07:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.048 23:07:52 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:47.048 23:07:52 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.048 23:07:52 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:47.048 23:07:52 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.048 23:07:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.048 23:07:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:47.048 23:07:52 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:47.048 23:07:52 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:47.048 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.048 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:47.048 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.048 23:07:52 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:47.048 23:07:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.048 23:07:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.048 23:07:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.048 23:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.048 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:47.048 23:07:52 -- target/referrals.sh@21 
-- # sort 00:08:47.048 23:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.308 23:07:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:47.308 23:07:52 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.308 23:07:52 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:47.308 23:07:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.308 23:07:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.308 23:07:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.308 23:07:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.308 23:07:52 -- target/referrals.sh@26 -- # sort 00:08:47.308 23:07:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:47.308 23:07:52 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.308 23:07:52 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:47.308 23:07:52 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:47.308 23:07:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.308 23:07:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.308 23:07:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.308 23:07:53 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:47.308 23:07:53 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.308 23:07:53 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:47.308 23:07:53 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.308 23:07:53 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.308 23:07:53 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:47.568 23:07:53 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:47.568 23:07:53 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:47.568 23:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.568 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.568 23:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.568 23:07:53 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.568 23:07:53 -- target/referrals.sh@82 -- # jq length 00:08:47.568 23:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.568 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.568 23:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.568 23:07:53 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:47.568 23:07:53 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:47.568 23:07:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.568 23:07:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.568 23:07:53 
-- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.568 23:07:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.568 23:07:53 -- target/referrals.sh@26 -- # sort 00:08:47.568 23:07:53 -- target/referrals.sh@26 -- # echo 00:08:47.568 23:07:53 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:47.568 23:07:53 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:47.568 23:07:53 -- target/referrals.sh@86 -- # nvmftestfini 00:08:47.568 23:07:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:47.568 23:07:53 -- nvmf/common.sh@116 -- # sync 00:08:47.568 23:07:53 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:47.568 23:07:53 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:47.568 23:07:53 -- nvmf/common.sh@119 -- # set +e 00:08:47.568 23:07:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:47.568 23:07:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:47.568 rmmod nvme_rdma 00:08:47.828 rmmod nvme_fabrics 00:08:47.828 23:07:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:47.828 23:07:53 -- nvmf/common.sh@123 -- # set -e 00:08:47.828 23:07:53 -- nvmf/common.sh@124 -- # return 0 00:08:47.828 23:07:53 -- nvmf/common.sh@477 -- # '[' -n 489746 ']' 00:08:47.828 23:07:53 -- nvmf/common.sh@478 -- # killprocess 489746 00:08:47.828 23:07:53 -- common/autotest_common.sh@926 -- # '[' -z 489746 ']' 00:08:47.828 23:07:53 -- common/autotest_common.sh@930 -- # kill -0 489746 00:08:47.828 23:07:53 -- common/autotest_common.sh@931 -- # uname 00:08:47.828 23:07:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.828 23:07:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 489746 00:08:47.828 23:07:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.828 23:07:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.828 23:07:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 489746' 00:08:47.828 killing process with pid 489746 00:08:47.828 23:07:53 -- common/autotest_common.sh@945 -- # kill 489746 00:08:47.828 23:07:53 -- common/autotest_common.sh@950 -- # wait 489746 00:08:48.088 23:07:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.088 23:07:53 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:48.088 00:08:48.088 real 0m9.402s 00:08:48.088 user 0m12.858s 00:08:48.088 sys 0m5.876s 00:08:48.088 23:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.088 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:08:48.088 ************************************ 00:08:48.088 END TEST nvmf_referrals 00:08:48.088 ************************************ 00:08:48.088 23:07:53 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:48.088 23:07:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:48.088 23:07:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.088 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:08:48.088 ************************************ 00:08:48.088 START TEST nvmf_connect_disconnect 00:08:48.088 ************************************ 00:08:48.088 23:07:53 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:48.088 * Looking for test storage... 00:08:48.088 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:48.088 23:07:53 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.088 23:07:53 -- nvmf/common.sh@7 -- # uname -s 00:08:48.348 23:07:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.348 23:07:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.348 23:07:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.348 23:07:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.348 23:07:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.348 23:07:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.348 23:07:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.348 23:07:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.348 23:07:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.348 23:07:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.348 23:07:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:48.348 23:07:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:48.348 23:07:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.348 23:07:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.348 23:07:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.348 23:07:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:48.348 23:07:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.348 23:07:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.348 23:07:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.348 23:07:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.348 23:07:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.348 23:07:53 -- paths/export.sh@4 -- # 
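Annotation: nvmftestinit now repeats for the connect_disconnect test the same NIC probe already traced in full for the referrals test: ConnectX functions (vendor 0x15b3, device 0x1015) are collected into the mlx array and their kernel net devices are resolved through sysfs. Condensed, that classification amounts to the sketch below, with the two PCI addresses hard-coded from the log rather than discovered:

# Sketch of the pci_devs -> net_devs resolution seen in the gather_supported_nvmf_pci_devs trace.
net_devs=()
for pci in 0000:d9:00.0 0000:d9:00.1; do                # taken from the "Found 0000:d9:00.x" lines
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # kernel netdev(s) behind this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done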
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.348 23:07:53 -- paths/export.sh@5 -- # export PATH 00:08:48.348 23:07:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.348 23:07:53 -- nvmf/common.sh@46 -- # : 0 00:08:48.349 23:07:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:48.349 23:07:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:48.349 23:07:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:48.349 23:07:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.349 23:07:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.349 23:07:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:48.349 23:07:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:48.349 23:07:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:48.349 23:07:53 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:48.349 23:07:53 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.349 23:07:53 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:48.349 23:07:53 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:48.349 23:07:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.349 23:07:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:48.349 23:07:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:48.349 23:07:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:48.349 23:07:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.349 23:07:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.349 23:07:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.349 23:07:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:48.349 23:07:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:48.349 23:07:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:48.349 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:08:54.923 23:08:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:54.923 23:08:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:54.923 23:08:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:54.923 23:08:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:54.923 23:08:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:54.923 23:08:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:54.923 23:08:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:54.923 23:08:00 -- nvmf/common.sh@294 -- # net_devs=() 00:08:54.923 23:08:00 -- nvmf/common.sh@294 -- # local -ga net_devs 
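The NIC discovery that gather_supported_nvmf_pci_devs traces out below comes down to collecting the Mellanox (0x15b3) PCI functions and reading each one's netdev name from sysfs. A minimal sketch of that loop, with the device IDs and sysfs layout taken from the trace and everything else illustrative:

    # Sketch: map each Mellanox PCI function to its netdev, as traced below.
    for pci in "${mlx[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:d9:00.0/net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done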
00:08:54.923 23:08:00 -- nvmf/common.sh@295 -- # e810=() 00:08:54.923 23:08:00 -- nvmf/common.sh@295 -- # local -ga e810 00:08:54.923 23:08:00 -- nvmf/common.sh@296 -- # x722=() 00:08:54.923 23:08:00 -- nvmf/common.sh@296 -- # local -ga x722 00:08:54.923 23:08:00 -- nvmf/common.sh@297 -- # mlx=() 00:08:54.923 23:08:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:54.923 23:08:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.923 23:08:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:54.923 23:08:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:54.923 23:08:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:54.923 23:08:00 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:54.923 23:08:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:54.923 23:08:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:54.923 23:08:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:54.923 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:54.923 23:08:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.923 23:08:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:54.923 23:08:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:54.923 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:54.923 23:08:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.923 23:08:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:54.923 23:08:00 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:54.923 23:08:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.923 23:08:00 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:54.923 23:08:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.923 23:08:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:54.923 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:54.923 23:08:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.923 23:08:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:54.923 23:08:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.923 23:08:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:54.923 23:08:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.923 23:08:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:54.923 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:54.923 23:08:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.923 23:08:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:54.923 23:08:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:54.923 23:08:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:54.923 23:08:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:54.923 23:08:00 -- nvmf/common.sh@57 -- # uname 00:08:54.923 23:08:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:54.923 23:08:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:54.923 23:08:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:54.923 23:08:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:54.923 23:08:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:54.923 23:08:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:54.923 23:08:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:54.923 23:08:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:54.923 23:08:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:54.923 23:08:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:54.923 23:08:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:54.923 23:08:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.923 23:08:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:54.923 23:08:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:54.923 23:08:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.923 23:08:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:54.923 23:08:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:54.923 23:08:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.923 23:08:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:54.923 23:08:00 -- nvmf/common.sh@104 -- # continue 2 00:08:54.923 23:08:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:54.923 23:08:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.923 23:08:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.923 23:08:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.924 23:08:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.924 23:08:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:54.924 23:08:00 -- nvmf/common.sh@104 -- # continue 2 00:08:54.924 23:08:00 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:54.924 23:08:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:54.924 23:08:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:54.924 23:08:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:54.924 23:08:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:54.924 23:08:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:54.924 23:08:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:54.924 23:08:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:54.924 23:08:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:54.924 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.924 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:54.924 altname enp217s0f0np0 00:08:54.924 altname ens818f0np0 00:08:54.924 inet 192.168.100.8/24 scope global mlx_0_0 00:08:54.924 valid_lft forever preferred_lft forever 00:08:54.924 23:08:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:54.924 23:08:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:54.924 23:08:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:54.924 23:08:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:54.924 23:08:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:54.924 23:08:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:54.924 23:08:00 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:54.924 23:08:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:54.924 23:08:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:54.924 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.924 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:54.924 altname enp217s0f1np1 00:08:54.924 altname ens818f1np1 00:08:54.924 inet 192.168.100.9/24 scope global mlx_0_1 00:08:54.924 valid_lft forever preferred_lft forever 00:08:54.924 23:08:00 -- nvmf/common.sh@410 -- # return 0 00:08:54.924 23:08:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:54.924 23:08:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:54.924 23:08:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:54.924 23:08:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:54.924 23:08:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:54.924 23:08:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.924 23:08:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:54.924 23:08:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:54.924 23:08:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:55.184 23:08:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:55.184 23:08:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:55.184 23:08:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.184 23:08:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:55.184 23:08:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:55.184 23:08:00 -- nvmf/common.sh@104 -- # continue 2 00:08:55.184 23:08:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:55.184 23:08:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.184 23:08:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:55.184 23:08:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.184 23:08:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:55.184 23:08:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 
00:08:55.184 23:08:00 -- nvmf/common.sh@104 -- # continue 2 00:08:55.184 23:08:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:55.184 23:08:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:55.184 23:08:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:55.184 23:08:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:55.184 23:08:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:55.184 23:08:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:55.184 23:08:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:55.184 23:08:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:55.184 23:08:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:55.184 23:08:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:55.184 23:08:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:55.184 23:08:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:55.184 23:08:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:55.184 192.168.100.9' 00:08:55.184 23:08:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:55.184 192.168.100.9' 00:08:55.184 23:08:00 -- nvmf/common.sh@445 -- # head -n 1 00:08:55.184 23:08:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:55.184 23:08:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:55.184 192.168.100.9' 00:08:55.184 23:08:00 -- nvmf/common.sh@446 -- # tail -n +2 00:08:55.184 23:08:00 -- nvmf/common.sh@446 -- # head -n 1 00:08:55.184 23:08:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:55.184 23:08:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:55.184 23:08:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:55.184 23:08:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:55.184 23:08:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:55.184 23:08:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:55.184 23:08:00 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:55.184 23:08:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:55.184 23:08:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:55.184 23:08:00 -- common/autotest_common.sh@10 -- # set +x 00:08:55.184 23:08:00 -- nvmf/common.sh@469 -- # nvmfpid=493601 00:08:55.184 23:08:00 -- nvmf/common.sh@470 -- # waitforlisten 493601 00:08:55.184 23:08:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:55.184 23:08:00 -- common/autotest_common.sh@819 -- # '[' -z 493601 ']' 00:08:55.184 23:08:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.184 23:08:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:55.184 23:08:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.184 23:08:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:55.184 23:08:00 -- common/autotest_common.sh@10 -- # set +x 00:08:55.184 [2024-11-02 23:08:00.825693] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
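The get_ip_address and RDMA_IP_LIST steps traced above reduce to a few lines of shell. A consolidated sketch, using the interface names and addresses reported in the trace (the helper mirrors nvmf/common.sh; the standalone form is illustrative):

    # Per-interface IPv4 lookup, as nvmf/common.sh get_ip_address does above.
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9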
00:08:55.184 [2024-11-02 23:08:00.825747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.184 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.184 [2024-11-02 23:08:00.896247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.443 [2024-11-02 23:08:00.966809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:55.443 [2024-11-02 23:08:00.966919] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.443 [2024-11-02 23:08:00.966928] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.443 [2024-11-02 23:08:00.966937] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.443 [2024-11-02 23:08:00.966986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.443 [2024-11-02 23:08:00.967030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.443 [2024-11-02 23:08:00.967117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.443 [2024-11-02 23:08:00.967119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.038 23:08:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.038 23:08:01 -- common/autotest_common.sh@852 -- # return 0 00:08:56.038 23:08:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:56.038 23:08:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:56.038 23:08:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.038 23:08:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.038 23:08:01 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:56.038 23:08:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.038 23:08:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.038 [2024-11-02 23:08:01.719418] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:56.038 [2024-11-02 23:08:01.740360] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbe8090/0xbec580) succeed. 00:08:56.038 [2024-11-02 23:08:01.749523] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbe9680/0xc2dc20) succeed. 
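The rpc_cmd calls traced around this point (nvmf_create_transport just above, the bdev, subsystem and listener setup just below) configure the connect/disconnect target. Condensed into one sequence, shown against scripts/rpc.py with the default /var/tmp/spdk.sock socket as an assumption, since rpc_cmd hides the socket path:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed stand-in for rpc_cmd
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                                     # the trace names this bdev Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420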
00:08:56.297 23:08:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:56.297 23:08:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.297 23:08:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.297 23:08:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:56.297 23:08:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.297 23:08:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.297 23:08:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.297 23:08:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.297 23:08:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.297 23:08:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:56.297 23:08:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.297 23:08:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.297 [2024-11-02 23:08:01.893401] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:56.297 23:08:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:56.297 23:08:01 -- target/connect_disconnect.sh@34 -- # set +x 00:08:59.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.929 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:42.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.000 23:13:19 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:14.000 23:13:19 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:14.000 23:13:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:14.000 23:13:19 -- nvmf/common.sh@116 -- # sync 00:14:14.000 23:13:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:14.000 23:13:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:14.000 23:13:19 -- nvmf/common.sh@119 -- # set +e 00:14:14.000 23:13:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:14.000 23:13:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:14.000 rmmod nvme_rdma 00:14:14.000 rmmod nvme_fabrics 00:14:14.000 23:13:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:14.000 23:13:19 -- nvmf/common.sh@123 -- # set -e 00:14:14.000 23:13:19 -- nvmf/common.sh@124 -- # return 0 00:14:14.000 23:13:19 -- nvmf/common.sh@477 -- # '[' -n 493601 ']' 00:14:14.000 23:13:19 -- nvmf/common.sh@478 -- # killprocess 493601 00:14:14.000 23:13:19 -- common/autotest_common.sh@926 -- # '[' -z 493601 ']' 00:14:14.000 23:13:19 -- common/autotest_common.sh@930 -- # kill -0 493601 00:14:14.000 23:13:19 -- common/autotest_common.sh@931 -- # uname 00:14:14.000 23:13:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:14.000 23:13:19 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 493601 00:14:14.000 23:13:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:14.000 23:13:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:14.000 23:13:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 493601' 00:14:14.000 killing process with pid 493601 00:14:14.000 23:13:19 -- common/autotest_common.sh@945 -- # kill 493601 00:14:14.000 23:13:19 -- common/autotest_common.sh@950 -- # wait 493601 00:14:14.260 23:13:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:14.260 23:13:19 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:14.260 00:14:14.260 real 5m26.141s 00:14:14.260 user 21m12.858s 00:14:14.260 sys 0m17.999s 00:14:14.260 23:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.260 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:14.260 ************************************ 00:14:14.260 END TEST nvmf_connect_disconnect 00:14:14.260 ************************************ 00:14:14.260 23:13:19 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:14.260 23:13:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:14.260 23:13:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:14.260 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:14:14.260 ************************************ 00:14:14.260 START TEST nvmf_multitarget 00:14:14.260 ************************************ 00:14:14.260 23:13:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:14.260 * Looking for test storage... 00:14:14.260 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.260 23:13:19 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.260 23:13:19 -- nvmf/common.sh@7 -- # uname -s 00:14:14.260 23:13:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.260 23:13:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.260 23:13:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.260 23:13:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.260 23:13:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.260 23:13:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.260 23:13:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.260 23:13:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.260 23:13:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.260 23:13:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.520 23:13:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:14.520 23:13:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:14.520 23:13:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.520 23:13:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.520 23:13:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.520 23:13:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:14.520 23:13:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.520 23:13:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.520 
23:13:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.520 23:13:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.520 23:13:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.520 23:13:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.520 23:13:20 -- paths/export.sh@5 -- # export PATH 00:14:14.520 23:13:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.520 23:13:20 -- nvmf/common.sh@46 -- # : 0 00:14:14.520 23:13:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:14.520 23:13:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:14.520 23:13:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:14.520 23:13:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.520 23:13:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.520 23:13:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:14.520 23:13:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:14.520 23:13:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:14.520 23:13:20 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:14.520 23:13:20 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:14.520 23:13:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:14.520 23:13:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.520 23:13:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:14.520 23:13:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:14:14.520 23:13:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:14.520 23:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.520 23:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.520 23:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.520 23:13:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:14.520 23:13:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:14.520 23:13:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:14.520 23:13:20 -- common/autotest_common.sh@10 -- # set +x 00:14:21.092 23:13:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.092 23:13:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:21.092 23:13:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:21.092 23:13:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:21.092 23:13:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:21.092 23:13:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:21.092 23:13:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:21.092 23:13:25 -- nvmf/common.sh@294 -- # net_devs=() 00:14:21.092 23:13:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:21.092 23:13:25 -- nvmf/common.sh@295 -- # e810=() 00:14:21.092 23:13:25 -- nvmf/common.sh@295 -- # local -ga e810 00:14:21.092 23:13:25 -- nvmf/common.sh@296 -- # x722=() 00:14:21.092 23:13:25 -- nvmf/common.sh@296 -- # local -ga x722 00:14:21.092 23:13:25 -- nvmf/common.sh@297 -- # mlx=() 00:14:21.092 23:13:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:21.092 23:13:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.092 23:13:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:21.092 23:13:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:21.092 23:13:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:21.092 23:13:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:21.092 23:13:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:21.092 23:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.092 23:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:21.092 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:21.092 23:13:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.092 
23:13:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.092 23:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.092 23:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:21.092 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:21.092 23:13:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.092 23:13:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:21.092 23:13:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.092 23:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.092 23:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.092 23:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.092 23:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:21.092 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:21.092 23:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.092 23:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.092 23:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.092 23:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.092 23:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.092 23:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:21.092 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:21.092 23:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.092 23:13:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:21.092 23:13:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:21.092 23:13:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:21.092 23:13:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:21.092 23:13:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:21.092 23:13:25 -- nvmf/common.sh@57 -- # uname 00:14:21.092 23:13:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:21.092 23:13:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:21.092 23:13:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:21.092 23:13:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:21.092 23:13:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:21.092 23:13:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:21.092 23:13:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:21.092 23:13:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:21.092 23:13:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:21.092 23:13:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:21.092 23:13:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:21.092 23:13:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.092 23:13:26 -- nvmf/common.sh@93 -- # 
mapfile -t rxe_net_devs 00:14:21.092 23:13:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:21.092 23:13:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.092 23:13:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:21.092 23:13:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:21.092 23:13:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.092 23:13:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.092 23:13:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:21.092 23:13:26 -- nvmf/common.sh@104 -- # continue 2 00:14:21.092 23:13:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:21.092 23:13:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.092 23:13:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.092 23:13:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.092 23:13:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.092 23:13:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:21.092 23:13:26 -- nvmf/common.sh@104 -- # continue 2 00:14:21.092 23:13:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:21.092 23:13:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:21.092 23:13:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:21.092 23:13:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:21.092 23:13:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:21.092 23:13:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:21.092 23:13:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:21.092 23:13:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:21.092 23:13:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:21.092 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.092 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:21.092 altname enp217s0f0np0 00:14:21.092 altname ens818f0np0 00:14:21.092 inet 192.168.100.8/24 scope global mlx_0_0 00:14:21.092 valid_lft forever preferred_lft forever 00:14:21.092 23:13:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:21.092 23:13:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:21.092 23:13:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:21.092 23:13:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:21.092 23:13:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:21.092 23:13:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:21.092 23:13:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:21.092 23:13:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:21.092 23:13:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:21.092 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.092 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:21.092 altname enp217s0f1np1 00:14:21.092 altname ens818f1np1 00:14:21.092 inet 192.168.100.9/24 scope global mlx_0_1 00:14:21.092 valid_lft forever preferred_lft forever 00:14:21.092 23:13:26 -- nvmf/common.sh@410 -- # return 0 00:14:21.092 23:13:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:21.092 23:13:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:21.092 23:13:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:21.092 23:13:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:21.092 23:13:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:21.092 23:13:26 -- nvmf/common.sh@91 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:14:21.092 23:13:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:21.092 23:13:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:21.092 23:13:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.092 23:13:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:21.092 23:13:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:21.093 23:13:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.093 23:13:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.093 23:13:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:21.093 23:13:26 -- nvmf/common.sh@104 -- # continue 2 00:14:21.093 23:13:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:21.093 23:13:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.093 23:13:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.093 23:13:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.093 23:13:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.093 23:13:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:21.093 23:13:26 -- nvmf/common.sh@104 -- # continue 2 00:14:21.093 23:13:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:21.093 23:13:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:21.093 23:13:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:21.093 23:13:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:21.093 23:13:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:21.093 23:13:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:21.093 23:13:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:21.093 23:13:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:21.093 23:13:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:21.093 23:13:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:21.093 23:13:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:21.093 23:13:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:21.093 23:13:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:21.093 192.168.100.9' 00:14:21.093 23:13:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:21.093 192.168.100.9' 00:14:21.093 23:13:26 -- nvmf/common.sh@445 -- # head -n 1 00:14:21.093 23:13:26 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:21.093 23:13:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:21.093 192.168.100.9' 00:14:21.093 23:13:26 -- nvmf/common.sh@446 -- # tail -n +2 00:14:21.093 23:13:26 -- nvmf/common.sh@446 -- # head -n 1 00:14:21.093 23:13:26 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:21.093 23:13:26 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:21.093 23:13:26 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:21.093 23:13:26 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:21.093 23:13:26 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:21.093 23:13:26 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:21.093 23:13:26 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:21.093 23:13:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:21.093 23:13:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:21.093 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.093 23:13:26 -- nvmf/common.sh@469 -- # nvmfpid=554270 00:14:21.093 23:13:26 -- 
nvmf/common.sh@470 -- # waitforlisten 554270 00:14:21.093 23:13:26 -- common/autotest_common.sh@819 -- # '[' -z 554270 ']' 00:14:21.093 23:13:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.093 23:13:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.093 23:13:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.093 23:13:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.093 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.093 23:13:26 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.093 [2024-11-02 23:13:26.230708] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:21.093 [2024-11-02 23:13:26.230759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.093 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.093 [2024-11-02 23:13:26.302443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.093 [2024-11-02 23:13:26.376726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:21.093 [2024-11-02 23:13:26.376837] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.093 [2024-11-02 23:13:26.376847] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.093 [2024-11-02 23:13:26.376856] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
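The multitarget checks traced below drive test/nvmf/target/multitarget_rpc.py directly. Condensed, with target names and sizes as in the trace and the count checks mirroring the jq length comparisons:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target to start with
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target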
00:14:21.093 [2024-11-02 23:13:26.376900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.093 [2024-11-02 23:13:26.377014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.093 [2024-11-02 23:13:26.377038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.093 [2024-11-02 23:13:26.377040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.352 23:13:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:21.353 23:13:27 -- common/autotest_common.sh@852 -- # return 0 00:14:21.353 23:13:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:21.353 23:13:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:21.353 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:14:21.353 23:13:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.353 23:13:27 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:21.353 23:13:27 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:21.353 23:13:27 -- target/multitarget.sh@21 -- # jq length 00:14:21.613 23:13:27 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:21.613 23:13:27 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:21.613 "nvmf_tgt_1" 00:14:21.613 23:13:27 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:21.872 "nvmf_tgt_2" 00:14:21.872 23:13:27 -- target/multitarget.sh@28 -- # jq length 00:14:21.872 23:13:27 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:21.872 23:13:27 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:21.872 23:13:27 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:21.872 true 00:14:21.872 23:13:27 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:22.132 true 00:14:22.132 23:13:27 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:22.132 23:13:27 -- target/multitarget.sh@35 -- # jq length 00:14:22.132 23:13:27 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:22.132 23:13:27 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:22.132 23:13:27 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:22.132 23:13:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:22.132 23:13:27 -- nvmf/common.sh@116 -- # sync 00:14:22.132 23:13:27 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:22.132 23:13:27 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:22.132 23:13:27 -- nvmf/common.sh@119 -- # set +e 00:14:22.132 23:13:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:22.132 23:13:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:22.132 rmmod nvme_rdma 00:14:22.132 rmmod nvme_fabrics 00:14:22.132 23:13:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:22.132 23:13:27 -- nvmf/common.sh@123 -- # set -e 00:14:22.132 23:13:27 -- nvmf/common.sh@124 -- # 
return 0 00:14:22.132 23:13:27 -- nvmf/common.sh@477 -- # '[' -n 554270 ']' 00:14:22.132 23:13:27 -- nvmf/common.sh@478 -- # killprocess 554270 00:14:22.132 23:13:27 -- common/autotest_common.sh@926 -- # '[' -z 554270 ']' 00:14:22.132 23:13:27 -- common/autotest_common.sh@930 -- # kill -0 554270 00:14:22.132 23:13:27 -- common/autotest_common.sh@931 -- # uname 00:14:22.392 23:13:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:22.392 23:13:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 554270 00:14:22.392 23:13:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:22.392 23:13:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:22.392 23:13:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 554270' 00:14:22.392 killing process with pid 554270 00:14:22.392 23:13:27 -- common/autotest_common.sh@945 -- # kill 554270 00:14:22.392 23:13:27 -- common/autotest_common.sh@950 -- # wait 554270 00:14:22.652 23:13:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:22.652 23:13:28 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:22.652 00:14:22.652 real 0m8.227s 00:14:22.652 user 0m9.397s 00:14:22.652 sys 0m5.282s 00:14:22.652 23:13:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.652 23:13:28 -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 ************************************ 00:14:22.652 END TEST nvmf_multitarget 00:14:22.652 ************************************ 00:14:22.652 23:13:28 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:22.652 23:13:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:22.652 23:13:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:22.652 23:13:28 -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 ************************************ 00:14:22.652 START TEST nvmf_rpc 00:14:22.652 ************************************ 00:14:22.652 23:13:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:22.652 * Looking for test storage... 
00:14:22.652 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:22.652 23:13:28 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.652 23:13:28 -- nvmf/common.sh@7 -- # uname -s 00:14:22.652 23:13:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.652 23:13:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.652 23:13:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.652 23:13:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.652 23:13:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.652 23:13:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.652 23:13:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.652 23:13:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.652 23:13:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.652 23:13:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.652 23:13:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:22.652 23:13:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:22.652 23:13:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.652 23:13:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.652 23:13:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.652 23:13:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:22.652 23:13:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.652 23:13:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.652 23:13:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.652 23:13:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.652 23:13:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.652 23:13:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.652 23:13:28 -- paths/export.sh@5 -- # export PATH 00:14:22.652 23:13:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.652 23:13:28 -- nvmf/common.sh@46 -- # : 0 00:14:22.652 23:13:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:22.652 23:13:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:22.652 23:13:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:22.652 23:13:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.652 23:13:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.652 23:13:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:22.652 23:13:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:22.652 23:13:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:22.652 23:13:28 -- target/rpc.sh@11 -- # loops=5 00:14:22.652 23:13:28 -- target/rpc.sh@23 -- # nvmftestinit 00:14:22.652 23:13:28 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:22.652 23:13:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.652 23:13:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:22.652 23:13:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:22.652 23:13:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:22.652 23:13:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.652 23:13:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.652 23:13:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.652 23:13:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:22.652 23:13:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:22.652 23:13:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:22.652 23:13:28 -- common/autotest_common.sh@10 -- # set +x 00:14:29.231 23:13:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:29.231 23:13:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:29.231 23:13:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:29.231 23:13:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:29.231 23:13:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:29.231 23:13:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:29.231 23:13:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:29.231 23:13:34 -- nvmf/common.sh@294 -- # net_devs=() 00:14:29.231 23:13:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:29.231 23:13:34 -- nvmf/common.sh@295 -- # e810=() 00:14:29.231 23:13:34 -- nvmf/common.sh@295 -- # local -ga e810 00:14:29.231 
23:13:34 -- nvmf/common.sh@296 -- # x722=() 00:14:29.231 23:13:34 -- nvmf/common.sh@296 -- # local -ga x722 00:14:29.231 23:13:34 -- nvmf/common.sh@297 -- # mlx=() 00:14:29.231 23:13:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:29.231 23:13:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.231 23:13:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.231 23:13:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.231 23:13:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.231 23:13:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.231 23:13:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.231 23:13:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.231 23:13:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.231 23:13:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.232 23:13:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.232 23:13:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.232 23:13:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:29.232 23:13:34 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:29.232 23:13:34 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:29.232 23:13:34 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:29.232 23:13:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:29.232 23:13:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:29.232 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:29.232 23:13:34 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:29.232 23:13:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:29.232 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:29.232 23:13:34 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:29.232 23:13:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:29.232 23:13:34 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.232 23:13:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:29.232 23:13:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
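The PCI walk above maps each Mellanox function (0x15b3:0x1015) to its netdev through sysfs; the same lookup can be reproduced by hand, with the device path taken from this trace:

    # netdev(s) backed by PCI function 0000:d9:00.0 (mlx_0_0 in this run)
    ls /sys/bus/pci/devices/0000:d9:00.0/net/
    # vendor/device IDs the script matches against
    cat /sys/bus/pci/devices/0000:d9:00.0/{vendor,device}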
00:14:29.232 23:13:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:29.232 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:29.232 23:13:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.232 23:13:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.232 23:13:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:29.232 23:13:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.232 23:13:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:29.232 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:29.232 23:13:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.232 23:13:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:29.232 23:13:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:29.232 23:13:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:29.232 23:13:34 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:29.232 23:13:34 -- nvmf/common.sh@57 -- # uname 00:14:29.232 23:13:34 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:29.232 23:13:34 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:29.232 23:13:34 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:29.232 23:13:34 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:29.232 23:13:34 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:29.232 23:13:34 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:29.232 23:13:34 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:29.232 23:13:34 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:29.232 23:13:34 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:29.232 23:13:34 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:29.232 23:13:34 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:29.232 23:13:34 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:29.232 23:13:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:29.232 23:13:34 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:29.232 23:13:34 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:29.232 23:13:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:29.232 23:13:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:29.232 23:13:34 -- nvmf/common.sh@104 -- # continue 2 00:14:29.232 23:13:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.232 23:13:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:29.232 23:13:34 -- nvmf/common.sh@104 -- # continue 2 00:14:29.232 23:13:34 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:29.232 23:13:34 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:14:29.232 23:13:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:29.232 23:13:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:29.232 23:13:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:29.232 23:13:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:29.232 23:13:34 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:29.232 23:13:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:29.232 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:29.232 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:29.232 altname enp217s0f0np0 00:14:29.232 altname ens818f0np0 00:14:29.232 inet 192.168.100.8/24 scope global mlx_0_0 00:14:29.232 valid_lft forever preferred_lft forever 00:14:29.232 23:13:34 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:29.232 23:13:34 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:29.232 23:13:34 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:29.232 23:13:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:29.232 23:13:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:29.232 23:13:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:29.232 23:13:34 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:29.232 23:13:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:29.232 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:29.232 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:29.232 altname enp217s0f1np1 00:14:29.232 altname ens818f1np1 00:14:29.232 inet 192.168.100.9/24 scope global mlx_0_1 00:14:29.232 valid_lft forever preferred_lft forever 00:14:29.232 23:13:34 -- nvmf/common.sh@410 -- # return 0 00:14:29.232 23:13:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:29.232 23:13:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:29.232 23:13:34 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:29.232 23:13:34 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:29.232 23:13:34 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:29.232 23:13:34 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:29.232 23:13:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:29.232 23:13:34 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:29.232 23:13:34 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:29.491 23:13:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:29.491 23:13:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:29.491 23:13:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.491 23:13:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:29.491 23:13:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:29.491 23:13:34 -- nvmf/common.sh@104 -- # continue 2 00:14:29.491 23:13:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:29.491 23:13:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.491 23:13:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:29.491 23:13:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.491 23:13:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:29.491 23:13:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:29.491 23:13:34 -- nvmf/common.sh@104 -- # continue 2 00:14:29.491 23:13:34 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:14:29.491 23:13:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:29.491 23:13:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:29.491 23:13:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:29.491 23:13:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:29.491 23:13:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:29.491 23:13:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:29.491 23:13:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:29.491 23:13:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:29.491 23:13:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:29.491 23:13:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:29.491 23:13:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:29.491 23:13:35 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:29.491 192.168.100.9' 00:14:29.491 23:13:35 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:29.491 192.168.100.9' 00:14:29.491 23:13:35 -- nvmf/common.sh@445 -- # head -n 1 00:14:29.491 23:13:35 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:29.491 23:13:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:29.491 192.168.100.9' 00:14:29.491 23:13:35 -- nvmf/common.sh@446 -- # tail -n +2 00:14:29.491 23:13:35 -- nvmf/common.sh@446 -- # head -n 1 00:14:29.491 23:13:35 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:29.491 23:13:35 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:29.491 23:13:35 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:29.491 23:13:35 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:29.491 23:13:35 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:29.491 23:13:35 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:29.491 23:13:35 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:29.491 23:13:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:29.491 23:13:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:29.491 23:13:35 -- common/autotest_common.sh@10 -- # set +x 00:14:29.491 23:13:35 -- nvmf/common.sh@469 -- # nvmfpid=557874 00:14:29.491 23:13:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.491 23:13:35 -- nvmf/common.sh@470 -- # waitforlisten 557874 00:14:29.491 23:13:35 -- common/autotest_common.sh@819 -- # '[' -z 557874 ']' 00:14:29.491 23:13:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.491 23:13:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:29.491 23:13:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.491 23:13:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:29.491 23:13:35 -- common/autotest_common.sh@10 -- # set +x 00:14:29.491 [2024-11-02 23:13:35.117830] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:29.491 [2024-11-02 23:13:35.117878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.491 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.492 [2024-11-02 23:13:35.186907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.750 [2024-11-02 23:13:35.256867] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:29.750 [2024-11-02 23:13:35.256989] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.750 [2024-11-02 23:13:35.257000] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.750 [2024-11-02 23:13:35.257008] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.751 [2024-11-02 23:13:35.257061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.751 [2024-11-02 23:13:35.257170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.751 [2024-11-02 23:13:35.257252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.751 [2024-11-02 23:13:35.257255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.319 23:13:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:30.319 23:13:35 -- common/autotest_common.sh@852 -- # return 0 00:14:30.319 23:13:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:30.319 23:13:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:30.319 23:13:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.319 23:13:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.319 23:13:36 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:30.319 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.319 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.319 23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.319 23:13:36 -- target/rpc.sh@26 -- # stats='{ 00:14:30.319 "tick_rate": 2500000000, 00:14:30.319 "poll_groups": [ 00:14:30.319 { 00:14:30.319 "name": "nvmf_tgt_poll_group_0", 00:14:30.319 "admin_qpairs": 0, 00:14:30.319 "io_qpairs": 0, 00:14:30.319 "current_admin_qpairs": 0, 00:14:30.319 "current_io_qpairs": 0, 00:14:30.319 "pending_bdev_io": 0, 00:14:30.319 "completed_nvme_io": 0, 00:14:30.319 "transports": [] 00:14:30.319 }, 00:14:30.319 { 00:14:30.319 "name": "nvmf_tgt_poll_group_1", 00:14:30.319 "admin_qpairs": 0, 00:14:30.319 "io_qpairs": 0, 00:14:30.319 "current_admin_qpairs": 0, 00:14:30.319 "current_io_qpairs": 0, 00:14:30.319 "pending_bdev_io": 0, 00:14:30.319 "completed_nvme_io": 0, 00:14:30.319 "transports": [] 00:14:30.319 }, 00:14:30.319 { 00:14:30.319 "name": "nvmf_tgt_poll_group_2", 00:14:30.319 "admin_qpairs": 0, 00:14:30.319 "io_qpairs": 0, 00:14:30.319 "current_admin_qpairs": 0, 00:14:30.319 "current_io_qpairs": 0, 00:14:30.319 "pending_bdev_io": 0, 00:14:30.319 "completed_nvme_io": 0, 00:14:30.319 "transports": [] 00:14:30.319 }, 00:14:30.319 { 00:14:30.319 "name": "nvmf_tgt_poll_group_3", 00:14:30.319 "admin_qpairs": 0, 00:14:30.319 "io_qpairs": 0, 00:14:30.319 "current_admin_qpairs": 0, 00:14:30.319 "current_io_qpairs": 0, 00:14:30.319 "pending_bdev_io": 0, 00:14:30.319 "completed_nvme_io": 0, 00:14:30.319 "transports": [] 
00:14:30.319 } 00:14:30.319 ] 00:14:30.319 }' 00:14:30.319 23:13:36 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:30.319 23:13:36 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:30.319 23:13:36 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:30.319 23:13:36 -- target/rpc.sh@15 -- # wc -l 00:14:30.319 23:13:36 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:30.319 23:13:36 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:30.579 23:13:36 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:30.579 23:13:36 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:30.579 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.579 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.579 [2024-11-02 23:13:36.137669] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e3b0a0/0x1e3f590) succeed. 00:14:30.579 [2024-11-02 23:13:36.148041] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e3c690/0x1e80c30) succeed. 00:14:30.579 23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.579 23:13:36 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:30.579 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.579 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.579 23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.579 23:13:36 -- target/rpc.sh@33 -- # stats='{ 00:14:30.579 "tick_rate": 2500000000, 00:14:30.579 "poll_groups": [ 00:14:30.579 { 00:14:30.579 "name": "nvmf_tgt_poll_group_0", 00:14:30.579 "admin_qpairs": 0, 00:14:30.579 "io_qpairs": 0, 00:14:30.579 "current_admin_qpairs": 0, 00:14:30.579 "current_io_qpairs": 0, 00:14:30.579 "pending_bdev_io": 0, 00:14:30.579 "completed_nvme_io": 0, 00:14:30.579 "transports": [ 00:14:30.579 { 00:14:30.579 "trtype": "RDMA", 00:14:30.579 "pending_data_buffer": 0, 00:14:30.579 "devices": [ 00:14:30.579 { 00:14:30.579 "name": "mlx5_0", 00:14:30.579 "polls": 16313, 00:14:30.579 "idle_polls": 16313, 00:14:30.579 "completions": 0, 00:14:30.579 "requests": 0, 00:14:30.579 "request_latency": 0, 00:14:30.579 "pending_free_request": 0, 00:14:30.579 "pending_rdma_read": 0, 00:14:30.579 "pending_rdma_write": 0, 00:14:30.579 "pending_rdma_send": 0, 00:14:30.579 "total_send_wrs": 0, 00:14:30.579 "send_doorbell_updates": 0, 00:14:30.579 "total_recv_wrs": 4096, 00:14:30.579 "recv_doorbell_updates": 1 00:14:30.579 }, 00:14:30.579 { 00:14:30.579 "name": "mlx5_1", 00:14:30.579 "polls": 16313, 00:14:30.579 "idle_polls": 16313, 00:14:30.579 "completions": 0, 00:14:30.579 "requests": 0, 00:14:30.579 "request_latency": 0, 00:14:30.579 "pending_free_request": 0, 00:14:30.579 "pending_rdma_read": 0, 00:14:30.579 "pending_rdma_write": 0, 00:14:30.579 "pending_rdma_send": 0, 00:14:30.579 "total_send_wrs": 0, 00:14:30.579 "send_doorbell_updates": 0, 00:14:30.579 "total_recv_wrs": 4096, 00:14:30.579 "recv_doorbell_updates": 1 00:14:30.579 } 00:14:30.579 ] 00:14:30.579 } 00:14:30.579 ] 00:14:30.579 }, 00:14:30.579 { 00:14:30.579 "name": "nvmf_tgt_poll_group_1", 00:14:30.579 "admin_qpairs": 0, 00:14:30.579 "io_qpairs": 0, 00:14:30.579 "current_admin_qpairs": 0, 00:14:30.579 "current_io_qpairs": 0, 00:14:30.579 "pending_bdev_io": 0, 00:14:30.579 "completed_nvme_io": 0, 00:14:30.579 "transports": [ 00:14:30.579 { 00:14:30.579 "trtype": "RDMA", 00:14:30.579 "pending_data_buffer": 0, 00:14:30.579 "devices": [ 00:14:30.579 { 00:14:30.579 "name": "mlx5_0", 00:14:30.579 "polls": 10341, 
00:14:30.579 "idle_polls": 10341, 00:14:30.579 "completions": 0, 00:14:30.579 "requests": 0, 00:14:30.579 "request_latency": 0, 00:14:30.579 "pending_free_request": 0, 00:14:30.579 "pending_rdma_read": 0, 00:14:30.579 "pending_rdma_write": 0, 00:14:30.579 "pending_rdma_send": 0, 00:14:30.579 "total_send_wrs": 0, 00:14:30.579 "send_doorbell_updates": 0, 00:14:30.579 "total_recv_wrs": 4096, 00:14:30.579 "recv_doorbell_updates": 1 00:14:30.579 }, 00:14:30.579 { 00:14:30.579 "name": "mlx5_1", 00:14:30.579 "polls": 10341, 00:14:30.579 "idle_polls": 10341, 00:14:30.579 "completions": 0, 00:14:30.579 "requests": 0, 00:14:30.579 "request_latency": 0, 00:14:30.579 "pending_free_request": 0, 00:14:30.579 "pending_rdma_read": 0, 00:14:30.579 "pending_rdma_write": 0, 00:14:30.579 "pending_rdma_send": 0, 00:14:30.579 "total_send_wrs": 0, 00:14:30.579 "send_doorbell_updates": 0, 00:14:30.579 "total_recv_wrs": 4096, 00:14:30.579 "recv_doorbell_updates": 1 00:14:30.579 } 00:14:30.579 ] 00:14:30.579 } 00:14:30.579 ] 00:14:30.579 }, 00:14:30.579 { 00:14:30.579 "name": "nvmf_tgt_poll_group_2", 00:14:30.579 "admin_qpairs": 0, 00:14:30.579 "io_qpairs": 0, 00:14:30.579 "current_admin_qpairs": 0, 00:14:30.579 "current_io_qpairs": 0, 00:14:30.579 "pending_bdev_io": 0, 00:14:30.579 "completed_nvme_io": 0, 00:14:30.579 "transports": [ 00:14:30.579 { 00:14:30.579 "trtype": "RDMA", 00:14:30.579 "pending_data_buffer": 0, 00:14:30.579 "devices": [ 00:14:30.579 { 00:14:30.579 "name": "mlx5_0", 00:14:30.579 "polls": 5740, 00:14:30.579 "idle_polls": 5740, 00:14:30.579 "completions": 0, 00:14:30.579 "requests": 0, 00:14:30.579 "request_latency": 0, 00:14:30.579 "pending_free_request": 0, 00:14:30.579 "pending_rdma_read": 0, 00:14:30.579 "pending_rdma_write": 0, 00:14:30.579 "pending_rdma_send": 0, 00:14:30.579 "total_send_wrs": 0, 00:14:30.579 "send_doorbell_updates": 0, 00:14:30.579 "total_recv_wrs": 4096, 00:14:30.579 "recv_doorbell_updates": 1 00:14:30.579 }, 00:14:30.579 { 00:14:30.579 "name": "mlx5_1", 00:14:30.579 "polls": 5740, 00:14:30.579 "idle_polls": 5740, 00:14:30.579 "completions": 0, 00:14:30.579 "requests": 0, 00:14:30.579 "request_latency": 0, 00:14:30.579 "pending_free_request": 0, 00:14:30.579 "pending_rdma_read": 0, 00:14:30.579 "pending_rdma_write": 0, 00:14:30.579 "pending_rdma_send": 0, 00:14:30.579 "total_send_wrs": 0, 00:14:30.579 "send_doorbell_updates": 0, 00:14:30.580 "total_recv_wrs": 4096, 00:14:30.580 "recv_doorbell_updates": 1 00:14:30.580 } 00:14:30.580 ] 00:14:30.580 } 00:14:30.580 ] 00:14:30.580 }, 00:14:30.580 { 00:14:30.580 "name": "nvmf_tgt_poll_group_3", 00:14:30.580 "admin_qpairs": 0, 00:14:30.580 "io_qpairs": 0, 00:14:30.580 "current_admin_qpairs": 0, 00:14:30.580 "current_io_qpairs": 0, 00:14:30.580 "pending_bdev_io": 0, 00:14:30.580 "completed_nvme_io": 0, 00:14:30.580 "transports": [ 00:14:30.580 { 00:14:30.580 "trtype": "RDMA", 00:14:30.580 "pending_data_buffer": 0, 00:14:30.580 "devices": [ 00:14:30.580 { 00:14:30.580 "name": "mlx5_0", 00:14:30.580 "polls": 937, 00:14:30.580 "idle_polls": 937, 00:14:30.580 "completions": 0, 00:14:30.580 "requests": 0, 00:14:30.580 "request_latency": 0, 00:14:30.580 "pending_free_request": 0, 00:14:30.580 "pending_rdma_read": 0, 00:14:30.580 "pending_rdma_write": 0, 00:14:30.580 "pending_rdma_send": 0, 00:14:30.580 "total_send_wrs": 0, 00:14:30.580 "send_doorbell_updates": 0, 00:14:30.580 "total_recv_wrs": 4096, 00:14:30.580 "recv_doorbell_updates": 1 00:14:30.580 }, 00:14:30.580 { 00:14:30.580 "name": "mlx5_1", 00:14:30.580 "polls": 937, 
00:14:30.580 "idle_polls": 937, 00:14:30.580 "completions": 0, 00:14:30.580 "requests": 0, 00:14:30.580 "request_latency": 0, 00:14:30.580 "pending_free_request": 0, 00:14:30.580 "pending_rdma_read": 0, 00:14:30.580 "pending_rdma_write": 0, 00:14:30.580 "pending_rdma_send": 0, 00:14:30.580 "total_send_wrs": 0, 00:14:30.580 "send_doorbell_updates": 0, 00:14:30.580 "total_recv_wrs": 4096, 00:14:30.580 "recv_doorbell_updates": 1 00:14:30.580 } 00:14:30.580 ] 00:14:30.580 } 00:14:30.580 ] 00:14:30.580 } 00:14:30.580 ] 00:14:30.580 }' 00:14:30.580 23:13:36 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:30.580 23:13:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:30.580 23:13:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:30.580 23:13:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:30.839 23:13:36 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:30.839 23:13:36 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:30.839 23:13:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:30.839 23:13:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:30.839 23:13:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:30.840 23:13:36 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:30.840 23:13:36 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:30.840 23:13:36 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:30.840 23:13:36 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:30.840 23:13:36 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:30.840 23:13:36 -- target/rpc.sh@15 -- # wc -l 00:14:30.840 23:13:36 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:30.840 23:13:36 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:30.840 23:13:36 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:30.840 23:13:36 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:30.840 23:13:36 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:30.840 23:13:36 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:30.840 23:13:36 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:30.840 23:13:36 -- target/rpc.sh@15 -- # wc -l 00:14:30.840 23:13:36 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:30.840 23:13:36 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:30.840 23:13:36 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:30.840 23:13:36 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:30.840 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.840 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.840 Malloc1 00:14:30.840 23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.840 23:13:36 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:30.840 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.840 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.840 23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.840 23:13:36 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.840 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.840 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.840 23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.840 
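The jsum checks above total the per-poll-group qpair counters out of nvmf_get_stats; an equivalent stand-alone query, using an illustrative jq expression rather than the script's jq-plus-awk pipeline, is:

    # both sums should be 0 on an idle target with no connected hosts
    scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].admin_qpairs] | add'
    scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'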
23:13:36 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:30.840 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.840 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.840 23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.840 23:13:36 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:30.840 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.840 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.840 [2024-11-02 23:13:36.584181] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:30.840 23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.840 23:13:36 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:30.840 23:13:36 -- common/autotest_common.sh@640 -- # local es=0 00:14:30.840 23:13:36 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:30.840 23:13:36 -- common/autotest_common.sh@628 -- # local arg=nvme 00:14:30.840 23:13:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:30.840 23:13:36 -- common/autotest_common.sh@632 -- # type -t nvme 00:14:31.099 23:13:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:31.099 23:13:36 -- common/autotest_common.sh@634 -- # type -P nvme 00:14:31.099 23:13:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:31.099 23:13:36 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:14:31.099 23:13:36 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:14:31.099 23:13:36 -- common/autotest_common.sh@643 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:31.099 [2024-11-02 23:13:36.630055] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:31.099 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:31.099 could not add new controller: failed to write to nvme-fabrics device 00:14:31.099 23:13:36 -- common/autotest_common.sh@643 -- # es=1 00:14:31.099 23:13:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:31.099 23:13:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:31.099 23:13:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:31.099 23:13:36 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:31.099 23:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.099 23:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:31.099 
23:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.099 23:13:36 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:32.037 23:13:37 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.037 23:13:37 -- common/autotest_common.sh@1177 -- # local i=0 00:14:32.037 23:13:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.037 23:13:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:32.037 23:13:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:33.942 23:13:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:33.942 23:13:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:33.942 23:13:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.942 23:13:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:33.942 23:13:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.942 23:13:39 -- common/autotest_common.sh@1187 -- # return 0 00:14:33.942 23:13:39 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.322 23:13:40 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:35.322 23:13:40 -- common/autotest_common.sh@1198 -- # local i=0 00:14:35.322 23:13:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:35.322 23:13:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.322 23:13:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:35.322 23:13:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.322 23:13:40 -- common/autotest_common.sh@1210 -- # return 0 00:14:35.322 23:13:40 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:35.322 23:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.322 23:13:40 -- common/autotest_common.sh@10 -- # set +x 00:14:35.322 23:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.322 23:13:40 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:35.322 23:13:40 -- common/autotest_common.sh@640 -- # local es=0 00:14:35.322 23:13:40 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:35.322 23:13:40 -- common/autotest_common.sh@628 -- # local arg=nvme 00:14:35.322 23:13:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.322 23:13:40 -- common/autotest_common.sh@632 -- # type -t nvme 00:14:35.322 23:13:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.322 23:13:40 -- common/autotest_common.sh@634 -- # type -P nvme 00:14:35.322 23:13:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.322 23:13:40 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:14:35.322 
23:13:40 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:14:35.322 23:13:40 -- common/autotest_common.sh@643 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:35.322 [2024-11-02 23:13:40.742166] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:35.322 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:35.322 could not add new controller: failed to write to nvme-fabrics device 00:14:35.322 23:13:40 -- common/autotest_common.sh@643 -- # es=1 00:14:35.322 23:13:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:35.322 23:13:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:35.322 23:13:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:35.322 23:13:40 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:35.322 23:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.322 23:13:40 -- common/autotest_common.sh@10 -- # set +x 00:14:35.322 23:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.322 23:13:40 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:36.259 23:13:41 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.259 23:13:41 -- common/autotest_common.sh@1177 -- # local i=0 00:14:36.259 23:13:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.259 23:13:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:36.259 23:13:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:38.170 23:13:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:38.170 23:13:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:38.170 23:13:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.171 23:13:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:38.171 23:13:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.171 23:13:43 -- common/autotest_common.sh@1187 -- # return 0 00:14:38.171 23:13:43 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.110 23:13:44 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.110 23:13:44 -- common/autotest_common.sh@1198 -- # local i=0 00:14:39.110 23:13:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:39.110 23:13:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.110 23:13:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:39.110 23:13:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.110 23:13:44 -- common/autotest_common.sh@1210 -- # return 0 00:14:39.110 23:13:44 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.110 23:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.110 23:13:44 -- common/autotest_common.sh@10 -- # set +x 00:14:39.110 23:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
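The loop that follows repeats the same subsystem lifecycle five times. Condensed into plain RPC and nvme-cli calls (host NQN/ID flags omitted for brevity), one iteration is roughly:

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        # ... wait for the namespace to appear, then tear everything down ...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done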
00:14:39.110 23:13:44 -- target/rpc.sh@81 -- # seq 1 5 00:14:39.110 23:13:44 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:39.110 23:13:44 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:39.110 23:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.110 23:13:44 -- common/autotest_common.sh@10 -- # set +x 00:14:39.110 23:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.110 23:13:44 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:39.110 23:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.110 23:13:44 -- common/autotest_common.sh@10 -- # set +x 00:14:39.110 [2024-11-02 23:13:44.837770] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:39.110 23:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.110 23:13:44 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:39.110 23:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.110 23:13:44 -- common/autotest_common.sh@10 -- # set +x 00:14:39.110 23:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.110 23:13:44 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:39.110 23:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.110 23:13:44 -- common/autotest_common.sh@10 -- # set +x 00:14:39.110 23:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.110 23:13:44 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:40.491 23:13:45 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:40.491 23:13:45 -- common/autotest_common.sh@1177 -- # local i=0 00:14:40.491 23:13:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.491 23:13:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:40.491 23:13:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:42.398 23:13:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:42.398 23:13:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:42.398 23:13:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.398 23:13:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:42.398 23:13:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.398 23:13:47 -- common/autotest_common.sh@1187 -- # return 0 00:14:42.398 23:13:47 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.338 23:13:48 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.338 23:13:48 -- common/autotest_common.sh@1198 -- # local i=0 00:14:43.338 23:13:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:43.338 23:13:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.338 23:13:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:43.338 23:13:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.338 23:13:48 -- common/autotest_common.sh@1210 -- # return 0 00:14:43.338 
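waitforserial above keys off the subsystem serial (-s SPDKISFASTANDAWESOME); once connected, the matching namespace block device can be located the same way, e.g.:

    # print the kernel block device whose SERIAL matches the SPDK subsystem serial
    lsblk -l -o NAME,SERIAL | awk '$2 == "SPDKISFASTANDAWESOME" {print $1}'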
23:13:48 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.338 23:13:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.338 23:13:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.338 23:13:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.338 23:13:48 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.338 23:13:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.338 23:13:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.338 23:13:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.338 23:13:48 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:43.338 23:13:48 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.338 23:13:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.338 23:13:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.338 23:13:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.338 23:13:48 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:43.338 23:13:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.338 23:13:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.338 [2024-11-02 23:13:48.904269] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:43.338 23:13:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.338 23:13:48 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:43.338 23:13:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.338 23:13:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.338 23:13:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.338 23:13:48 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.338 23:13:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.338 23:13:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.338 23:13:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.338 23:13:48 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:44.278 23:13:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:44.278 23:13:49 -- common/autotest_common.sh@1177 -- # local i=0 00:14:44.278 23:13:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.278 23:13:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:44.278 23:13:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:46.187 23:13:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:46.187 23:13:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:46.187 23:13:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.187 23:13:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:46.187 23:13:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.187 23:13:51 -- common/autotest_common.sh@1187 -- # return 0 00:14:46.187 23:13:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.387 23:13:52 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.387 23:13:52 -- common/autotest_common.sh@1198 -- # local i=0 00:14:47.387 23:13:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:47.387 23:13:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.387 23:13:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.387 23:13:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:47.387 23:13:52 -- common/autotest_common.sh@1210 -- # return 0 00:14:47.387 23:13:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:47.387 23:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.387 23:13:52 -- common/autotest_common.sh@10 -- # set +x 00:14:47.387 23:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.387 23:13:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.387 23:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.387 23:13:52 -- common/autotest_common.sh@10 -- # set +x 00:14:47.387 23:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.387 23:13:52 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:47.387 23:13:52 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.387 23:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.387 23:13:52 -- common/autotest_common.sh@10 -- # set +x 00:14:47.387 23:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.387 23:13:52 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:47.387 23:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.387 23:13:52 -- common/autotest_common.sh@10 -- # set +x 00:14:47.387 [2024-11-02 23:13:52.941504] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:47.387 23:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.387 23:13:52 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:47.387 23:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.387 23:13:52 -- common/autotest_common.sh@10 -- # set +x 00:14:47.387 23:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.387 23:13:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.387 23:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.387 23:13:52 -- common/autotest_common.sh@10 -- # set +x 00:14:47.387 23:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.387 23:13:52 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:48.326 23:13:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:48.326 23:13:53 -- common/autotest_common.sh@1177 -- # local i=0 00:14:48.326 23:13:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.326 23:13:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:48.326 23:13:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:50.290 23:13:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:50.290 23:13:55 -- 
common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:50.290 23:13:55 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.290 23:13:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:50.290 23:13:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.290 23:13:55 -- common/autotest_common.sh@1187 -- # return 0 00:14:50.290 23:13:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.229 23:13:56 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.229 23:13:56 -- common/autotest_common.sh@1198 -- # local i=0 00:14:51.229 23:13:56 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:51.229 23:13:56 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.229 23:13:56 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:51.229 23:13:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.229 23:13:56 -- common/autotest_common.sh@1210 -- # return 0 00:14:51.229 23:13:56 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:51.229 23:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.229 23:13:56 -- common/autotest_common.sh@10 -- # set +x 00:14:51.229 23:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.229 23:13:56 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.229 23:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.229 23:13:56 -- common/autotest_common.sh@10 -- # set +x 00:14:51.229 23:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.229 23:13:56 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:51.229 23:13:56 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.229 23:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.229 23:13:56 -- common/autotest_common.sh@10 -- # set +x 00:14:51.489 23:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.489 23:13:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:51.489 23:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.489 23:13:56 -- common/autotest_common.sh@10 -- # set +x 00:14:51.489 [2024-11-02 23:13:56.996748] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:51.489 23:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.489 23:13:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:51.489 23:13:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.489 23:13:57 -- common/autotest_common.sh@10 -- # set +x 00:14:51.489 23:13:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.489 23:13:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.489 23:13:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.489 23:13:57 -- common/autotest_common.sh@10 -- # set +x 00:14:51.489 23:13:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.489 23:13:57 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:52.427 23:13:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:52.427 23:13:57 -- common/autotest_common.sh@1177 -- # local i=0 00:14:52.427 23:13:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.427 23:13:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:52.427 23:13:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:54.335 23:13:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:54.335 23:13:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:54.335 23:13:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.335 23:14:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:54.335 23:14:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.335 23:14:00 -- common/autotest_common.sh@1187 -- # return 0 00:14:54.335 23:14:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.273 23:14:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.273 23:14:00 -- common/autotest_common.sh@1198 -- # local i=0 00:14:55.273 23:14:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:55.273 23:14:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.273 23:14:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:55.273 23:14:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.273 23:14:01 -- common/autotest_common.sh@1210 -- # return 0 00:14:55.273 23:14:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.273 23:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.273 23:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:55.273 23:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.273 23:14:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.273 23:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.273 23:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:55.531 23:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.532 23:14:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:55.532 23:14:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:55.532 23:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.532 23:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:55.532 23:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.532 23:14:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:55.532 23:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.532 23:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:55.532 [2024-11-02 23:14:01.041265] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:55.532 23:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.532 23:14:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:55.532 23:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.532 23:14:01 -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.532 23:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.532 23:14:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:55.532 23:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.532 23:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:55.532 23:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.532 23:14:01 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:56.469 23:14:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.469 23:14:02 -- common/autotest_common.sh@1177 -- # local i=0 00:14:56.469 23:14:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.469 23:14:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:56.469 23:14:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:58.375 23:14:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:58.375 23:14:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:58.375 23:14:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.375 23:14:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:58.375 23:14:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.375 23:14:04 -- common/autotest_common.sh@1187 -- # return 0 00:14:58.375 23:14:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.314 23:14:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.314 23:14:05 -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.314 23:14:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:59.314 23:14:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.314 23:14:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:59.314 23:14:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.314 23:14:05 -- common/autotest_common.sh@1210 -- # return 0 00:14:59.314 23:14:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.314 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.314 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.314 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.314 23:14:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.314 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.314 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@99 -- # seq 1 5 00:14:59.575 23:14:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.575 23:14:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 [2024-11-02 23:14:05.093290] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.575 23:14:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 [2024-11-02 23:14:05.145459] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 
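The connect/disconnect cycles traced above lean on two polling helpers: waitforserial waits until a block device whose SERIAL column matches the subsystem serial shows up after nvme connect, and waitforserial_disconnect waits until it is gone again after nvme disconnect. A minimal reconstruction built only from the lsblk/grep calls visible in this trace (the 15-iteration bound and the sleep lengths are assumptions, not the helpers' real source):

    # Poll until a block device exposing the given NVMe serial appears.
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do            # retry bound assumed from the -i 15 connect timeout
            # Count lsblk rows whose SERIAL column carries the serial we just connected.
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }

    # Poll until no block device with that serial remains after nvme disconnect.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1       # give up after the same assumed bound
            sleep 1
        done
        return 0
    }

Both are called with SPDKISFASTANDAWESOME, the serial passed to nvmf_create_subsystem in the trace above.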
23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.575 23:14:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 [2024-11-02 23:14:05.193589] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.575 23:14:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.575 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.575 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.575 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.575 23:14:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 [2024-11-02 23:14:05.241771] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.576 23:14:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 [2024-11-02 23:14:05.293971] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.576 23:14:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.576 23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.576 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.576 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.836 23:14:05 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:59.836 
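At this point the trace has completed five passes of the same subsystem life-cycle driven purely through RPCs (no host connect in this loop) and is about to dump transport statistics. One pass of that life-cycle, reconstructed from the rpc_cmd calls above, looks roughly like the sketch below; the ./scripts/rpc.py path is an assumption for a standalone run, and a Malloc1 bdev is expected to exist on the target already:

    rpc=./scripts/rpc.py                 # assumed location of SPDK's rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 5); do
        # Create the subsystem, expose it over NVMe/RDMA and attach the namespace.
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        # Tear everything down so the next iteration starts from scratch.
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_delete_subsystem "$nqn"
    done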
23:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.836 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:14:59.836 23:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.836 23:14:05 -- target/rpc.sh@110 -- # stats='{ 00:14:59.836 "tick_rate": 2500000000, 00:14:59.836 "poll_groups": [ 00:14:59.836 { 00:14:59.836 "name": "nvmf_tgt_poll_group_0", 00:14:59.836 "admin_qpairs": 2, 00:14:59.836 "io_qpairs": 27, 00:14:59.836 "current_admin_qpairs": 0, 00:14:59.836 "current_io_qpairs": 0, 00:14:59.836 "pending_bdev_io": 0, 00:14:59.836 "completed_nvme_io": 77, 00:14:59.836 "transports": [ 00:14:59.836 { 00:14:59.836 "trtype": "RDMA", 00:14:59.836 "pending_data_buffer": 0, 00:14:59.836 "devices": [ 00:14:59.836 { 00:14:59.836 "name": "mlx5_0", 00:14:59.836 "polls": 3495365, 00:14:59.836 "idle_polls": 3495121, 00:14:59.836 "completions": 263, 00:14:59.836 "requests": 131, 00:14:59.836 "request_latency": 20464438, 00:14:59.836 "pending_free_request": 0, 00:14:59.836 "pending_rdma_read": 0, 00:14:59.836 "pending_rdma_write": 0, 00:14:59.836 "pending_rdma_send": 0, 00:14:59.836 "total_send_wrs": 207, 00:14:59.836 "send_doorbell_updates": 122, 00:14:59.836 "total_recv_wrs": 4227, 00:14:59.836 "recv_doorbell_updates": 122 00:14:59.836 }, 00:14:59.836 { 00:14:59.836 "name": "mlx5_1", 00:14:59.836 "polls": 3495365, 00:14:59.836 "idle_polls": 3495365, 00:14:59.836 "completions": 0, 00:14:59.836 "requests": 0, 00:14:59.836 "request_latency": 0, 00:14:59.836 "pending_free_request": 0, 00:14:59.836 "pending_rdma_read": 0, 00:14:59.836 "pending_rdma_write": 0, 00:14:59.836 "pending_rdma_send": 0, 00:14:59.836 "total_send_wrs": 0, 00:14:59.836 "send_doorbell_updates": 0, 00:14:59.836 "total_recv_wrs": 4096, 00:14:59.836 "recv_doorbell_updates": 1 00:14:59.836 } 00:14:59.836 ] 00:14:59.836 } 00:14:59.836 ] 00:14:59.836 }, 00:14:59.836 { 00:14:59.836 "name": "nvmf_tgt_poll_group_1", 00:14:59.836 "admin_qpairs": 2, 00:14:59.836 "io_qpairs": 26, 00:14:59.836 "current_admin_qpairs": 0, 00:14:59.836 "current_io_qpairs": 0, 00:14:59.836 "pending_bdev_io": 0, 00:14:59.836 "completed_nvme_io": 127, 00:14:59.836 "transports": [ 00:14:59.836 { 00:14:59.836 "trtype": "RDMA", 00:14:59.836 "pending_data_buffer": 0, 00:14:59.836 "devices": [ 00:14:59.836 { 00:14:59.836 "name": "mlx5_0", 00:14:59.836 "polls": 3430397, 00:14:59.836 "idle_polls": 3430077, 00:14:59.836 "completions": 360, 00:14:59.836 "requests": 180, 00:14:59.836 "request_latency": 34570182, 00:14:59.836 "pending_free_request": 0, 00:14:59.836 "pending_rdma_read": 0, 00:14:59.836 "pending_rdma_write": 0, 00:14:59.836 "pending_rdma_send": 0, 00:14:59.836 "total_send_wrs": 306, 00:14:59.836 "send_doorbell_updates": 157, 00:14:59.836 "total_recv_wrs": 4276, 00:14:59.836 "recv_doorbell_updates": 158 00:14:59.836 }, 00:14:59.836 { 00:14:59.836 "name": "mlx5_1", 00:14:59.836 "polls": 3430397, 00:14:59.836 "idle_polls": 3430397, 00:14:59.836 "completions": 0, 00:14:59.836 "requests": 0, 00:14:59.836 "request_latency": 0, 00:14:59.836 "pending_free_request": 0, 00:14:59.836 "pending_rdma_read": 0, 00:14:59.836 "pending_rdma_write": 0, 00:14:59.836 "pending_rdma_send": 0, 00:14:59.836 "total_send_wrs": 0, 00:14:59.836 "send_doorbell_updates": 0, 00:14:59.836 "total_recv_wrs": 4096, 00:14:59.836 "recv_doorbell_updates": 1 00:14:59.836 } 00:14:59.836 ] 00:14:59.836 } 00:14:59.836 ] 00:14:59.836 }, 00:14:59.836 { 00:14:59.836 "name": "nvmf_tgt_poll_group_2", 00:14:59.836 "admin_qpairs": 1, 00:14:59.836 "io_qpairs": 26, 00:14:59.836 
"current_admin_qpairs": 0, 00:14:59.836 "current_io_qpairs": 0, 00:14:59.836 "pending_bdev_io": 0, 00:14:59.836 "completed_nvme_io": 174, 00:14:59.836 "transports": [ 00:14:59.836 { 00:14:59.836 "trtype": "RDMA", 00:14:59.836 "pending_data_buffer": 0, 00:14:59.836 "devices": [ 00:14:59.836 { 00:14:59.836 "name": "mlx5_0", 00:14:59.836 "polls": 3446535, 00:14:59.836 "idle_polls": 3446190, 00:14:59.836 "completions": 403, 00:14:59.836 "requests": 201, 00:14:59.836 "request_latency": 45969596, 00:14:59.836 "pending_free_request": 0, 00:14:59.836 "pending_rdma_read": 0, 00:14:59.836 "pending_rdma_write": 0, 00:14:59.836 "pending_rdma_send": 0, 00:14:59.836 "total_send_wrs": 362, 00:14:59.836 "send_doorbell_updates": 169, 00:14:59.836 "total_recv_wrs": 4297, 00:14:59.836 "recv_doorbell_updates": 169 00:14:59.836 }, 00:14:59.836 { 00:14:59.836 "name": "mlx5_1", 00:14:59.836 "polls": 3446535, 00:14:59.836 "idle_polls": 3446535, 00:14:59.836 "completions": 0, 00:14:59.836 "requests": 0, 00:14:59.836 "request_latency": 0, 00:14:59.836 "pending_free_request": 0, 00:14:59.836 "pending_rdma_read": 0, 00:14:59.836 "pending_rdma_write": 0, 00:14:59.836 "pending_rdma_send": 0, 00:14:59.836 "total_send_wrs": 0, 00:14:59.836 "send_doorbell_updates": 0, 00:14:59.836 "total_recv_wrs": 4096, 00:14:59.836 "recv_doorbell_updates": 1 00:14:59.836 } 00:14:59.837 ] 00:14:59.837 } 00:14:59.837 ] 00:14:59.837 }, 00:14:59.837 { 00:14:59.837 "name": "nvmf_tgt_poll_group_3", 00:14:59.837 "admin_qpairs": 2, 00:14:59.837 "io_qpairs": 26, 00:14:59.837 "current_admin_qpairs": 0, 00:14:59.837 "current_io_qpairs": 0, 00:14:59.837 "pending_bdev_io": 0, 00:14:59.837 "completed_nvme_io": 77, 00:14:59.837 "transports": [ 00:14:59.837 { 00:14:59.837 "trtype": "RDMA", 00:14:59.837 "pending_data_buffer": 0, 00:14:59.837 "devices": [ 00:14:59.837 { 00:14:59.837 "name": "mlx5_0", 00:14:59.837 "polls": 2718021, 00:14:59.837 "idle_polls": 2717782, 00:14:59.837 "completions": 260, 00:14:59.837 "requests": 130, 00:14:59.837 "request_latency": 22455740, 00:14:59.837 "pending_free_request": 0, 00:14:59.837 "pending_rdma_read": 0, 00:14:59.837 "pending_rdma_write": 0, 00:14:59.837 "pending_rdma_send": 0, 00:14:59.837 "total_send_wrs": 206, 00:14:59.837 "send_doorbell_updates": 117, 00:14:59.837 "total_recv_wrs": 4226, 00:14:59.837 "recv_doorbell_updates": 118 00:14:59.837 }, 00:14:59.837 { 00:14:59.837 "name": "mlx5_1", 00:14:59.837 "polls": 2718021, 00:14:59.837 "idle_polls": 2718021, 00:14:59.837 "completions": 0, 00:14:59.837 "requests": 0, 00:14:59.837 "request_latency": 0, 00:14:59.837 "pending_free_request": 0, 00:14:59.837 "pending_rdma_read": 0, 00:14:59.837 "pending_rdma_write": 0, 00:14:59.837 "pending_rdma_send": 0, 00:14:59.837 "total_send_wrs": 0, 00:14:59.837 "send_doorbell_updates": 0, 00:14:59.837 "total_recv_wrs": 4096, 00:14:59.837 "recv_doorbell_updates": 1 00:14:59.837 } 00:14:59.837 ] 00:14:59.837 } 00:14:59.837 ] 00:14:59.837 } 00:14:59.837 ] 00:14:59.837 }' 00:14:59.837 23:14:05 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:59.837 23:14:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:59.837 23:14:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:59.837 23:14:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.837 23:14:05 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:59.837 23:14:05 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:59.837 23:14:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:59.837 
23:14:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:59.837 23:14:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.837 23:14:05 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:14:59.837 23:14:05 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:14:59.837 23:14:05 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:14:59.837 23:14:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:14:59.837 23:14:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:14:59.837 23:14:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.837 23:14:05 -- target/rpc.sh@117 -- # (( 1286 > 0 )) 00:14:59.837 23:14:05 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:14:59.837 23:14:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:14:59.837 23:14:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:14:59.837 23:14:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.837 23:14:05 -- target/rpc.sh@118 -- # (( 123459956 > 0 )) 00:14:59.837 23:14:05 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:59.837 23:14:05 -- target/rpc.sh@123 -- # nvmftestfini 00:14:59.837 23:14:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:59.837 23:14:05 -- nvmf/common.sh@116 -- # sync 00:14:59.837 23:14:05 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:59.837 23:14:05 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:59.837 23:14:05 -- nvmf/common.sh@119 -- # set +e 00:14:59.837 23:14:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:59.837 23:14:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:59.837 rmmod nvme_rdma 00:14:59.837 rmmod nvme_fabrics 00:15:00.097 23:14:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:00.097 23:14:05 -- nvmf/common.sh@123 -- # set -e 00:15:00.097 23:14:05 -- nvmf/common.sh@124 -- # return 0 00:15:00.097 23:14:05 -- nvmf/common.sh@477 -- # '[' -n 557874 ']' 00:15:00.097 23:14:05 -- nvmf/common.sh@478 -- # killprocess 557874 00:15:00.097 23:14:05 -- common/autotest_common.sh@926 -- # '[' -z 557874 ']' 00:15:00.097 23:14:05 -- common/autotest_common.sh@930 -- # kill -0 557874 00:15:00.097 23:14:05 -- common/autotest_common.sh@931 -- # uname 00:15:00.097 23:14:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:00.097 23:14:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 557874 00:15:00.097 23:14:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:00.097 23:14:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:00.097 23:14:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 557874' 00:15:00.097 killing process with pid 557874 00:15:00.097 23:14:05 -- common/autotest_common.sh@945 -- # kill 557874 00:15:00.097 23:14:05 -- common/autotest_common.sh@950 -- # wait 557874 00:15:00.357 23:14:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:00.357 23:14:05 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:00.357 00:15:00.357 real 0m37.776s 00:15:00.357 user 2m4.589s 00:15:00.357 sys 0m6.843s 00:15:00.357 23:14:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.357 23:14:05 -- common/autotest_common.sh@10 -- # set +x 00:15:00.357 ************************************ 00:15:00.357 END TEST nvmf_rpc 00:15:00.357 ************************************ 00:15:00.357 23:14:06 -- 
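The statistics check that closes out nvmf_rpc above boils down to one helper: jsum runs a jq filter over the nvmf_get_stats JSON and sums the resulting numbers with awk. A reconstruction of that pipeline, assuming the stats document printed earlier has been saved to stats.json (a file name used here only for illustration); the totals in the comments are the values asserted in this run:

    # Sum one numeric field of the nvmf_get_stats output across all poll groups.
    # stats.json: assumed dump of the JSON printed by nvmf_get_stats above.
    jsum() {
        local filter=$1
        jq "$filter" stats.json | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'                           # 7 in this run
    jsum '.poll_groups[].io_qpairs'                              # 105 in this run
    jsum '.poll_groups[].transports[].devices[].completions'     # 1286 in this run
    jsum '.poll_groups[].transports[].devices[].request_latency' # 123459956 in this run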
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:00.357 23:14:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.357 23:14:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.357 23:14:06 -- common/autotest_common.sh@10 -- # set +x 00:15:00.357 ************************************ 00:15:00.357 START TEST nvmf_invalid 00:15:00.357 ************************************ 00:15:00.357 23:14:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:00.617 * Looking for test storage... 00:15:00.617 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:00.617 23:14:06 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.617 23:14:06 -- nvmf/common.sh@7 -- # uname -s 00:15:00.617 23:14:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.617 23:14:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.617 23:14:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.617 23:14:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.617 23:14:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.617 23:14:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.617 23:14:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.617 23:14:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.617 23:14:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.617 23:14:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.617 23:14:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:00.617 23:14:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:00.617 23:14:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.617 23:14:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.617 23:14:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.617 23:14:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:00.617 23:14:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.617 23:14:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.617 23:14:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.618 23:14:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.618 23:14:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.618 23:14:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.618 23:14:06 -- paths/export.sh@5 -- # export PATH 00:15:00.618 23:14:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.618 23:14:06 -- nvmf/common.sh@46 -- # : 0 00:15:00.618 23:14:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:00.618 23:14:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:00.618 23:14:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:00.618 23:14:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.618 23:14:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.618 23:14:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:00.618 23:14:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:00.618 23:14:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:00.618 23:14:06 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:00.618 23:14:06 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:00.618 23:14:06 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:00.618 23:14:06 -- target/invalid.sh@14 -- # target=foobar 00:15:00.618 23:14:06 -- target/invalid.sh@16 -- # RANDOM=0 00:15:00.618 23:14:06 -- target/invalid.sh@34 -- # nvmftestinit 00:15:00.618 23:14:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:00.618 23:14:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.618 23:14:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:00.618 23:14:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:00.618 23:14:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:00.618 23:14:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.618 23:14:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.618 23:14:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.618 23:14:06 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:00.618 23:14:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:00.618 23:14:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:00.618 23:14:06 -- common/autotest_common.sh@10 -- # set +x 00:15:07.194 23:14:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:07.194 23:14:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:07.194 23:14:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:07.194 23:14:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:07.194 23:14:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:07.194 23:14:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:07.194 23:14:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:07.194 23:14:11 -- nvmf/common.sh@294 -- # net_devs=() 00:15:07.194 23:14:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:07.194 23:14:11 -- nvmf/common.sh@295 -- # e810=() 00:15:07.194 23:14:11 -- nvmf/common.sh@295 -- # local -ga e810 00:15:07.194 23:14:11 -- nvmf/common.sh@296 -- # x722=() 00:15:07.194 23:14:11 -- nvmf/common.sh@296 -- # local -ga x722 00:15:07.194 23:14:11 -- nvmf/common.sh@297 -- # mlx=() 00:15:07.194 23:14:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:07.194 23:14:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:07.194 23:14:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:07.194 23:14:11 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:07.194 23:14:11 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:07.194 23:14:11 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:07.194 23:14:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:07.194 23:14:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:07.194 23:14:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:07.194 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:07.194 23:14:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:07.194 23:14:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:07.194 23:14:11 -- nvmf/common.sh@340 -- # echo 'Found 
0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:07.194 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:07.194 23:14:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:07.194 23:14:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:07.194 23:14:11 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:07.194 23:14:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.194 23:14:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:07.194 23:14:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.194 23:14:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:07.194 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:07.194 23:14:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.194 23:14:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:07.194 23:14:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.194 23:14:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:07.194 23:14:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.194 23:14:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:07.194 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:07.194 23:14:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.194 23:14:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:07.194 23:14:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:07.194 23:14:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:07.194 23:14:11 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:07.194 23:14:11 -- nvmf/common.sh@57 -- # uname 00:15:07.194 23:14:11 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:07.194 23:14:11 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:07.194 23:14:11 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:07.194 23:14:11 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:07.194 23:14:11 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:07.194 23:14:11 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:07.194 23:14:11 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:07.194 23:14:11 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:07.194 23:14:11 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:07.194 23:14:11 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:07.194 23:14:11 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:07.194 23:14:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:07.194 23:14:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:07.194 23:14:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:07.194 23:14:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:07.194 23:14:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:07.194 23:14:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
00:15:07.194 23:14:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.194 23:14:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:07.194 23:14:11 -- nvmf/common.sh@104 -- # continue 2 00:15:07.194 23:14:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:07.194 23:14:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.194 23:14:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.194 23:14:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:07.194 23:14:11 -- nvmf/common.sh@104 -- # continue 2 00:15:07.194 23:14:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:07.194 23:14:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:07.194 23:14:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:07.194 23:14:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:07.194 23:14:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:07.194 23:14:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:07.194 23:14:11 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:07.194 23:14:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:07.194 23:14:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:07.194 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:07.194 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:07.194 altname enp217s0f0np0 00:15:07.194 altname ens818f0np0 00:15:07.194 inet 192.168.100.8/24 scope global mlx_0_0 00:15:07.194 valid_lft forever preferred_lft forever 00:15:07.194 23:14:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:07.194 23:14:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:07.195 23:14:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:07.195 23:14:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:07.195 23:14:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:07.195 23:14:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:07.195 23:14:11 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:07.195 23:14:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:07.195 23:14:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:07.195 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:07.195 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:07.195 altname enp217s0f1np1 00:15:07.195 altname ens818f1np1 00:15:07.195 inet 192.168.100.9/24 scope global mlx_0_1 00:15:07.195 valid_lft forever preferred_lft forever 00:15:07.195 23:14:11 -- nvmf/common.sh@410 -- # return 0 00:15:07.195 23:14:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:07.195 23:14:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:07.195 23:14:11 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:07.195 23:14:11 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:07.195 23:14:12 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:07.195 23:14:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:07.195 23:14:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:07.195 23:14:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:07.195 23:14:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:07.195 23:14:12 -- nvmf/common.sh@95 -- # (( 2 == 0 
)) 00:15:07.195 23:14:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:07.195 23:14:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.195 23:14:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:07.195 23:14:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:07.195 23:14:12 -- nvmf/common.sh@104 -- # continue 2 00:15:07.195 23:14:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:07.195 23:14:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.195 23:14:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:07.195 23:14:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.195 23:14:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:07.195 23:14:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:07.195 23:14:12 -- nvmf/common.sh@104 -- # continue 2 00:15:07.195 23:14:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:07.195 23:14:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:07.195 23:14:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:07.195 23:14:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:07.195 23:14:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:07.195 23:14:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:07.195 23:14:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:07.195 23:14:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:07.195 23:14:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:07.195 23:14:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:07.195 23:14:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:07.195 23:14:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:07.195 23:14:12 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:07.195 192.168.100.9' 00:15:07.195 23:14:12 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:07.195 192.168.100.9' 00:15:07.195 23:14:12 -- nvmf/common.sh@445 -- # head -n 1 00:15:07.195 23:14:12 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:07.195 23:14:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:07.195 192.168.100.9' 00:15:07.195 23:14:12 -- nvmf/common.sh@446 -- # tail -n +2 00:15:07.195 23:14:12 -- nvmf/common.sh@446 -- # head -n 1 00:15:07.195 23:14:12 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:07.195 23:14:12 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:07.195 23:14:12 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:07.195 23:14:12 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:07.195 23:14:12 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:07.195 23:14:12 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:07.195 23:14:12 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:07.195 23:14:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.195 23:14:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:07.195 23:14:12 -- common/autotest_common.sh@10 -- # set +x 00:15:07.195 23:14:12 -- nvmf/common.sh@469 -- # nvmfpid=566497 00:15:07.195 23:14:12 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:07.195 23:14:12 -- nvmf/common.sh@470 -- # waitforlisten 566497 00:15:07.195 23:14:12 -- common/autotest_common.sh@819 -- # '[' -z 566497 ']' 00:15:07.195 23:14:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.195 23:14:12 
-- common/autotest_common.sh@824 -- # local max_retries=100 00:15:07.195 23:14:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.195 23:14:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:07.195 23:14:12 -- common/autotest_common.sh@10 -- # set +x 00:15:07.195 [2024-11-02 23:14:12.148272] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:07.195 [2024-11-02 23:14:12.148323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.195 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.195 [2024-11-02 23:14:12.217364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:07.195 [2024-11-02 23:14:12.285390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.195 [2024-11-02 23:14:12.285528] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.195 [2024-11-02 23:14:12.285537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.195 [2024-11-02 23:14:12.285546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.195 [2024-11-02 23:14:12.285595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.195 [2024-11-02 23:14:12.285615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.195 [2024-11-02 23:14:12.285700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.195 [2024-11-02 23:14:12.285702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.455 23:14:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:07.455 23:14:12 -- common/autotest_common.sh@852 -- # return 0 00:15:07.455 23:14:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:07.455 23:14:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:07.455 23:14:12 -- common/autotest_common.sh@10 -- # set +x 00:15:07.455 23:14:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.455 23:14:13 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:07.455 23:14:13 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11094 00:15:07.455 [2024-11-02 23:14:13.174800] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:07.455 23:14:13 -- target/invalid.sh@40 -- # out='request: 00:15:07.455 { 00:15:07.455 "nqn": "nqn.2016-06.io.spdk:cnode11094", 00:15:07.455 "tgt_name": "foobar", 00:15:07.455 "method": "nvmf_create_subsystem", 00:15:07.455 "req_id": 1 00:15:07.455 } 00:15:07.455 Got JSON-RPC error response 00:15:07.455 response: 00:15:07.455 { 00:15:07.455 "code": -32603, 00:15:07.455 "message": "Unable to find target foobar" 00:15:07.455 }' 00:15:07.455 23:14:13 -- target/invalid.sh@41 -- # [[ request: 00:15:07.455 { 00:15:07.455 "nqn": "nqn.2016-06.io.spdk:cnode11094", 00:15:07.455 "tgt_name": "foobar", 00:15:07.455 "method": "nvmf_create_subsystem", 
00:15:07.455 "req_id": 1 00:15:07.455 } 00:15:07.455 Got JSON-RPC error response 00:15:07.455 response: 00:15:07.455 { 00:15:07.455 "code": -32603, 00:15:07.455 "message": "Unable to find target foobar" 00:15:07.455 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:07.455 23:14:13 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:07.714 23:14:13 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9180 00:15:07.714 [2024-11-02 23:14:13.367510] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9180: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:07.714 23:14:13 -- target/invalid.sh@45 -- # out='request: 00:15:07.714 { 00:15:07.714 "nqn": "nqn.2016-06.io.spdk:cnode9180", 00:15:07.714 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:07.714 "method": "nvmf_create_subsystem", 00:15:07.714 "req_id": 1 00:15:07.714 } 00:15:07.714 Got JSON-RPC error response 00:15:07.714 response: 00:15:07.714 { 00:15:07.714 "code": -32602, 00:15:07.714 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:07.714 }' 00:15:07.714 23:14:13 -- target/invalid.sh@46 -- # [[ request: 00:15:07.714 { 00:15:07.714 "nqn": "nqn.2016-06.io.spdk:cnode9180", 00:15:07.714 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:07.714 "method": "nvmf_create_subsystem", 00:15:07.714 "req_id": 1 00:15:07.714 } 00:15:07.714 Got JSON-RPC error response 00:15:07.714 response: 00:15:07.714 { 00:15:07.714 "code": -32602, 00:15:07.714 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:07.714 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:07.714 23:14:13 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:07.714 23:14:13 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31408 00:15:07.974 [2024-11-02 23:14:13.568124] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31408: invalid model number 'SPDK_Controller' 00:15:07.974 23:14:13 -- target/invalid.sh@50 -- # out='request: 00:15:07.974 { 00:15:07.974 "nqn": "nqn.2016-06.io.spdk:cnode31408", 00:15:07.974 "model_number": "SPDK_Controller\u001f", 00:15:07.974 "method": "nvmf_create_subsystem", 00:15:07.974 "req_id": 1 00:15:07.974 } 00:15:07.974 Got JSON-RPC error response 00:15:07.974 response: 00:15:07.974 { 00:15:07.974 "code": -32602, 00:15:07.974 "message": "Invalid MN SPDK_Controller\u001f" 00:15:07.974 }' 00:15:07.974 23:14:13 -- target/invalid.sh@51 -- # [[ request: 00:15:07.974 { 00:15:07.974 "nqn": "nqn.2016-06.io.spdk:cnode31408", 00:15:07.974 "model_number": "SPDK_Controller\u001f", 00:15:07.974 "method": "nvmf_create_subsystem", 00:15:07.974 "req_id": 1 00:15:07.974 } 00:15:07.974 Got JSON-RPC error response 00:15:07.974 response: 00:15:07.974 { 00:15:07.974 "code": -32602, 00:15:07.974 "message": "Invalid MN SPDK_Controller\u001f" 00:15:07.974 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:07.974 23:14:13 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:07.974 23:14:13 -- target/invalid.sh@19 -- # local length=21 ll 00:15:07.974 23:14:13 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' 
'94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:07.974 23:14:13 -- target/invalid.sh@21 -- # local chars 00:15:07.974 23:14:13 -- target/invalid.sh@22 -- # local string 00:15:07.974 23:14:13 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:07.974 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # printf %x 85 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # string+=U 00:15:07.974 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.974 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # printf %x 105 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # string+=i 00:15:07.974 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.974 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # printf %x 54 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # string+=6 00:15:07.974 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.974 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # printf %x 82 00:15:07.974 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=R 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 62 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+='>' 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 81 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=Q 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 101 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=e 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 86 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=V 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 109 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=m 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 74 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=J 
00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 100 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=d 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 34 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+='"' 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 49 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=1 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 105 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=i 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 127 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 87 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # string+=W 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.975 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.975 23:14:13 -- target/invalid.sh@25 -- # printf %x 60 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # string+='<' 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # printf %x 88 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # string+=X 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # printf %x 61 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # string+== 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # printf %x 125 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # string+='}' 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # printf %x 36 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:08.235 23:14:13 -- target/invalid.sh@25 -- # 
string+='$' 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:08.235 23:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:08.235 23:14:13 -- target/invalid.sh@28 -- # [[ U == \- ]] 00:15:08.235 23:14:13 -- target/invalid.sh@31 -- # echo 'Ui6R>QeVmJd"1iWQeVmJd"1iWQeVmJd"1iWQeVmJd\"1i\u007fWQeVmJd\"1i\u007fW /dev/null' 00:15:11.094 23:14:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.094 23:14:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:11.094 23:14:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:11.094 23:14:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:11.094 23:14:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.668 23:14:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:17.668 23:14:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:17.668 23:14:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:17.668 23:14:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:17.668 23:14:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:17.668 23:14:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:17.668 23:14:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:17.668 23:14:22 -- nvmf/common.sh@294 -- # net_devs=() 00:15:17.669 23:14:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:17.669 23:14:22 -- nvmf/common.sh@295 -- # e810=() 00:15:17.669 23:14:22 -- nvmf/common.sh@295 -- # local -ga e810 00:15:17.669 23:14:22 -- nvmf/common.sh@296 -- # x722=() 00:15:17.669 23:14:22 -- nvmf/common.sh@296 -- # local -ga x722 00:15:17.669 23:14:22 -- nvmf/common.sh@297 -- # mlx=() 00:15:17.669 23:14:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:17.669 23:14:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.669 23:14:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:17.669 23:14:22 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:17.669 23:14:22 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:17.669 23:14:22 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:17.669 23:14:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:17.669 23:14:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:17.669 23:14:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:17.669 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:17.669 23:14:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 
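
The invalid.sh steps above are negative tests of the target's parameter validation: nvmf_create_subsystem must reject a serial number or model number carrying a non-printable byte with an "Invalid SN" / "Invalid MN" JSON-RPC error. A minimal standalone sketch of those two checks, assuming an nvmf_tgt already listening on /var/tmp/spdk.sock and the rpc.py path used by this job (both taken from the trace), could look like:

    #!/usr/bin/env bash
    # Hypothetical reproduction of the invalid SN / invalid MN checks traced above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Append a unit-separator control byte (0x1f) to otherwise valid values.
    bad_sn=$'SPDKISFASTANDAWESOME\037'
    bad_mn=$'SPDK_Controller\037'

    out=$("$rpc" nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode9180 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo "serial number with 0x1f correctly rejected"

    out=$("$rpc" nvmf_create_subsystem -d "$bad_mn" nqn.2016-06.io.spdk:cnode31408 2>&1) || true
    [[ $out == *"Invalid MN"* ]] && echo "model number with 0x1f correctly rejected"

The 21-character string built byte by byte above (printf %x followed by echo -e) feeds the same model-number path with arbitrary printable and non-printable characters.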
00:15:17.669 23:14:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.669 23:14:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:17.669 23:14:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:17.669 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:17.669 23:14:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.669 23:14:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:17.669 23:14:22 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:17.669 23:14:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.669 23:14:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:17.669 23:14:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.669 23:14:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:17.669 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:17.669 23:14:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.669 23:14:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:17.669 23:14:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.669 23:14:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:17.669 23:14:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.669 23:14:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:17.669 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:17.669 23:14:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.669 23:14:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:17.669 23:14:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:17.669 23:14:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:17.669 23:14:22 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:17.669 23:14:22 -- nvmf/common.sh@57 -- # uname 00:15:17.669 23:14:22 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:17.669 23:14:22 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:17.669 23:14:22 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:17.669 23:14:22 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:17.669 23:14:22 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:17.669 23:14:22 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:17.669 23:14:22 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:17.669 23:14:22 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:17.669 23:14:22 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:17.669 23:14:22 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:17.669 23:14:22 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:17.669 23:14:22 -- nvmf/common.sh@91 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:15:17.669 23:14:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:17.669 23:14:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:17.669 23:14:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.669 23:14:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:17.669 23:14:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:17.669 23:14:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.669 23:14:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.669 23:14:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:17.669 23:14:22 -- nvmf/common.sh@104 -- # continue 2 00:15:17.669 23:14:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:17.669 23:14:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.669 23:14:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.669 23:14:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.669 23:14:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.669 23:14:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:17.669 23:14:23 -- nvmf/common.sh@104 -- # continue 2 00:15:17.669 23:14:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:17.669 23:14:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:17.669 23:14:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:17.669 23:14:23 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:17.669 23:14:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:17.669 23:14:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:17.669 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.669 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:17.669 altname enp217s0f0np0 00:15:17.669 altname ens818f0np0 00:15:17.669 inet 192.168.100.8/24 scope global mlx_0_0 00:15:17.669 valid_lft forever preferred_lft forever 00:15:17.669 23:14:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:17.669 23:14:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:17.669 23:14:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:17.669 23:14:23 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:17.669 23:14:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:17.669 23:14:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:17.669 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.669 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:17.669 altname enp217s0f1np1 00:15:17.669 altname ens818f1np1 00:15:17.669 inet 192.168.100.9/24 scope global mlx_0_1 00:15:17.669 valid_lft forever preferred_lft forever 00:15:17.669 23:14:23 -- nvmf/common.sh@410 -- # return 0 00:15:17.669 23:14:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:17.669 23:14:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:17.669 23:14:23 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:17.669 23:14:23 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:17.669 23:14:23 -- 
nvmf/common.sh@85 -- # get_rdma_if_list 00:15:17.669 23:14:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.669 23:14:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:17.669 23:14:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:17.669 23:14:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.669 23:14:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:17.669 23:14:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:17.669 23:14:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.669 23:14:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.669 23:14:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:17.669 23:14:23 -- nvmf/common.sh@104 -- # continue 2 00:15:17.669 23:14:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:17.669 23:14:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.669 23:14:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.669 23:14:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.669 23:14:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.669 23:14:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:17.669 23:14:23 -- nvmf/common.sh@104 -- # continue 2 00:15:17.669 23:14:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:17.669 23:14:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:17.669 23:14:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:17.669 23:14:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:17.669 23:14:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:17.669 23:14:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:17.669 23:14:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:17.670 23:14:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:17.670 23:14:23 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:17.670 192.168.100.9' 00:15:17.670 23:14:23 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:17.670 192.168.100.9' 00:15:17.670 23:14:23 -- nvmf/common.sh@445 -- # head -n 1 00:15:17.670 23:14:23 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:17.670 23:14:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:17.670 192.168.100.9' 00:15:17.670 23:14:23 -- nvmf/common.sh@446 -- # tail -n +2 00:15:17.670 23:14:23 -- nvmf/common.sh@446 -- # head -n 1 00:15:17.670 23:14:23 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:17.670 23:14:23 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:17.670 23:14:23 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:17.670 23:14:23 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:17.670 23:14:23 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:17.670 23:14:23 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:17.670 23:14:23 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:17.670 23:14:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.670 23:14:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:17.670 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:17.670 
23:14:23 -- nvmf/common.sh@469 -- # nvmfpid=570706 00:15:17.670 23:14:23 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:17.670 23:14:23 -- nvmf/common.sh@470 -- # waitforlisten 570706 00:15:17.670 23:14:23 -- common/autotest_common.sh@819 -- # '[' -z 570706 ']' 00:15:17.670 23:14:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.670 23:14:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.670 23:14:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.670 23:14:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.670 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:17.670 [2024-11-02 23:14:23.210875] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:17.670 [2024-11-02 23:14:23.210926] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.670 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.670 [2024-11-02 23:14:23.281440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:17.670 [2024-11-02 23:14:23.351029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.670 [2024-11-02 23:14:23.351146] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.670 [2024-11-02 23:14:23.351156] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.670 [2024-11-02 23:14:23.351164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.670 [2024-11-02 23:14:23.351266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.670 [2024-11-02 23:14:23.351351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.670 [2024-11-02 23:14:23.351353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.608 23:14:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.608 23:14:24 -- common/autotest_common.sh@852 -- # return 0 00:15:18.608 23:14:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.608 23:14:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:18.608 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:18.608 23:14:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.608 23:14:24 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:18.608 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.608 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:18.608 [2024-11-02 23:14:24.100762] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd5e860/0xd62d50) succeed. 00:15:18.608 [2024-11-02 23:14:24.109818] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd5fdb0/0xda43f0) succeed. 
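
With both mlx5 IB devices created, the abort test configures the target entirely through the rpc_cmd helper; the trace that follows shows the exact calls. Issuing the same bring-up directly through rpc.py, assuming the script path and the 192.168.100.8/4420 RDMA listener used throughout this job, would look roughly like:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # RDMA transport with the buffer/queue options used by abort.sh.
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256

    # Namespace backing: a 64 MB malloc bdev (4096-byte blocks) wrapped in a delay bdev.
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Subsystem, namespace and listeners.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # Drive abort commands against the queue-depth-limited controller for one second.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128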
00:15:18.608 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.608 23:14:24 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:18.608 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.608 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:18.608 Malloc0 00:15:18.608 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.608 23:14:24 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:18.608 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.608 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:18.608 Delay0 00:15:18.608 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.608 23:14:24 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:18.608 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.608 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:18.608 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.608 23:14:24 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:18.608 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.608 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:18.608 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.608 23:14:24 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:18.608 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.608 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:18.608 [2024-11-02 23:14:24.263166] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:18.608 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.608 23:14:24 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:18.608 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.608 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:18.608 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.608 23:14:24 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:18.609 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.609 [2024-11-02 23:14:24.355884] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:21.145 Initializing NVMe Controllers 00:15:21.145 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:21.145 controller IO queue size 128 less than required 00:15:21.145 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:21.145 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:21.145 Initialization complete. Launching workers. 
00:15:21.145 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51775 00:15:21.145 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51836, failed to submit 62 00:15:21.145 success 51775, unsuccess 61, failed 0 00:15:21.145 23:14:26 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:21.145 23:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.145 23:14:26 -- common/autotest_common.sh@10 -- # set +x 00:15:21.145 23:14:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.145 23:14:26 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:21.145 23:14:26 -- target/abort.sh@38 -- # nvmftestfini 00:15:21.145 23:14:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:21.145 23:14:26 -- nvmf/common.sh@116 -- # sync 00:15:21.145 23:14:26 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:21.145 23:14:26 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:21.145 23:14:26 -- nvmf/common.sh@119 -- # set +e 00:15:21.145 23:14:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:21.145 23:14:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:21.145 rmmod nvme_rdma 00:15:21.145 rmmod nvme_fabrics 00:15:21.145 23:14:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:21.145 23:14:26 -- nvmf/common.sh@123 -- # set -e 00:15:21.145 23:14:26 -- nvmf/common.sh@124 -- # return 0 00:15:21.145 23:14:26 -- nvmf/common.sh@477 -- # '[' -n 570706 ']' 00:15:21.145 23:14:26 -- nvmf/common.sh@478 -- # killprocess 570706 00:15:21.145 23:14:26 -- common/autotest_common.sh@926 -- # '[' -z 570706 ']' 00:15:21.145 23:14:26 -- common/autotest_common.sh@930 -- # kill -0 570706 00:15:21.145 23:14:26 -- common/autotest_common.sh@931 -- # uname 00:15:21.145 23:14:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:21.145 23:14:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 570706 00:15:21.145 23:14:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:21.145 23:14:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:21.145 23:14:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 570706' 00:15:21.145 killing process with pid 570706 00:15:21.145 23:14:26 -- common/autotest_common.sh@945 -- # kill 570706 00:15:21.145 23:14:26 -- common/autotest_common.sh@950 -- # wait 570706 00:15:21.145 23:14:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:21.145 23:14:26 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:21.145 00:15:21.145 real 0m10.207s 00:15:21.145 user 0m14.312s 00:15:21.145 sys 0m5.310s 00:15:21.145 23:14:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.145 23:14:26 -- common/autotest_common.sh@10 -- # set +x 00:15:21.145 ************************************ 00:15:21.145 END TEST nvmf_abort 00:15:21.145 ************************************ 00:15:21.404 23:14:26 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:21.404 23:14:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:21.404 23:14:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:21.404 23:14:26 -- common/autotest_common.sh@10 -- # set +x 00:15:21.404 ************************************ 00:15:21.404 START TEST nvmf_ns_hotplug_stress 00:15:21.404 ************************************ 00:15:21.404 23:14:26 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:21.404 * Looking for test storage... 00:15:21.404 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:21.404 23:14:27 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.404 23:14:27 -- nvmf/common.sh@7 -- # uname -s 00:15:21.404 23:14:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.404 23:14:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.404 23:14:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.404 23:14:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.404 23:14:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.404 23:14:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.404 23:14:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.404 23:14:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.404 23:14:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.404 23:14:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.404 23:14:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:21.405 23:14:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:21.405 23:14:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.405 23:14:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.405 23:14:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.405 23:14:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:21.405 23:14:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.405 23:14:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.405 23:14:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.405 23:14:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.405 23:14:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.405 23:14:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.405 23:14:27 -- paths/export.sh@5 -- # export PATH 00:15:21.405 23:14:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.405 23:14:27 -- nvmf/common.sh@46 -- # : 0 00:15:21.405 23:14:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:21.405 23:14:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:21.405 23:14:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:21.405 23:14:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.405 23:14:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.405 23:14:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:21.405 23:14:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:21.405 23:14:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:21.405 23:14:27 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:21.405 23:14:27 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:21.405 23:14:27 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:21.405 23:14:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.405 23:14:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:21.405 23:14:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:21.405 23:14:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:21.405 23:14:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.405 23:14:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.405 23:14:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.405 23:14:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:21.405 23:14:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:21.405 23:14:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:21.405 23:14:27 -- common/autotest_common.sh@10 -- # set +x 00:15:27.978 23:14:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:27.978 23:14:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:27.978 23:14:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:27.978 23:14:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:27.978 23:14:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:27.978 23:14:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:27.978 23:14:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:27.978 23:14:33 -- nvmf/common.sh@294 -- # net_devs=() 00:15:27.978 23:14:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:27.978 23:14:33 -- nvmf/common.sh@295 -- 
# e810=() 00:15:27.978 23:14:33 -- nvmf/common.sh@295 -- # local -ga e810 00:15:27.978 23:14:33 -- nvmf/common.sh@296 -- # x722=() 00:15:27.978 23:14:33 -- nvmf/common.sh@296 -- # local -ga x722 00:15:27.978 23:14:33 -- nvmf/common.sh@297 -- # mlx=() 00:15:27.978 23:14:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:27.978 23:14:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.978 23:14:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:27.978 23:14:33 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:27.978 23:14:33 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:27.978 23:14:33 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:27.978 23:14:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:27.978 23:14:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:27.978 23:14:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:27.978 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:27.978 23:14:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.978 23:14:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:27.978 23:14:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:27.978 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:27.978 23:14:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.978 23:14:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:27.978 23:14:33 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:27.978 23:14:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:27.978 23:14:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.978 23:14:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
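
The per-interface address discovery that nvmf/common.sh performed above, and repeats below for this test, reduces to listing the mlx netdevs and parsing ip -o -4 addr show. A minimal, hypothetical condensation of get_rdma_if_list plus get_ip_address, reusing the awk/cut pipeline visible in the trace and assuming the mlx_0_0/mlx_0_1 names on this rig:

    for ifc in mlx_0_0 mlx_0_1; do
        addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
        echo "$ifc -> ${addr:-<no IPv4 address>}"
    done

On this host it would report 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1, matching the RDMA_IP_LIST assembled in the trace.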
00:15:27.978 23:14:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.978 23:14:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:27.979 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.979 23:14:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.979 23:14:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:27.979 23:14:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.979 23:14:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:27.979 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:27.979 23:14:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.979 23:14:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:27.979 23:14:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:27.979 23:14:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:27.979 23:14:33 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:27.979 23:14:33 -- nvmf/common.sh@57 -- # uname 00:15:27.979 23:14:33 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:27.979 23:14:33 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:27.979 23:14:33 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:27.979 23:14:33 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:27.979 23:14:33 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:27.979 23:14:33 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:27.979 23:14:33 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:27.979 23:14:33 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:27.979 23:14:33 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:27.979 23:14:33 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:27.979 23:14:33 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:27.979 23:14:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.979 23:14:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:27.979 23:14:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:27.979 23:14:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.979 23:14:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:27.979 23:14:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@104 -- # continue 2 00:15:27.979 23:14:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:27.979 23:14:33 -- nvmf/common.sh@104 -- # continue 2 00:15:27.979 23:14:33 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:27.979 23:14:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:27.979 23:14:33 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:27.979 23:14:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:27.979 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.979 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:27.979 altname enp217s0f0np0 00:15:27.979 altname ens818f0np0 00:15:27.979 inet 192.168.100.8/24 scope global mlx_0_0 00:15:27.979 valid_lft forever preferred_lft forever 00:15:27.979 23:14:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:27.979 23:14:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:27.979 23:14:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:27.979 23:14:33 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:27.979 23:14:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:27.979 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.979 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:27.979 altname enp217s0f1np1 00:15:27.979 altname ens818f1np1 00:15:27.979 inet 192.168.100.9/24 scope global mlx_0_1 00:15:27.979 valid_lft forever preferred_lft forever 00:15:27.979 23:14:33 -- nvmf/common.sh@410 -- # return 0 00:15:27.979 23:14:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:27.979 23:14:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:27.979 23:14:33 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:27.979 23:14:33 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:27.979 23:14:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.979 23:14:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:27.979 23:14:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:27.979 23:14:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.979 23:14:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:27.979 23:14:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@104 -- # continue 2 00:15:27.979 23:14:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.979 23:14:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.979 23:14:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:27.979 23:14:33 -- 
nvmf/common.sh@104 -- # continue 2 00:15:27.979 23:14:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:27.979 23:14:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:27.979 23:14:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:27.979 23:14:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:27.979 23:14:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:27.979 23:14:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:28.239 23:14:33 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:28.239 192.168.100.9' 00:15:28.239 23:14:33 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:28.239 192.168.100.9' 00:15:28.239 23:14:33 -- nvmf/common.sh@445 -- # head -n 1 00:15:28.239 23:14:33 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:28.239 23:14:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:28.239 192.168.100.9' 00:15:28.239 23:14:33 -- nvmf/common.sh@446 -- # tail -n +2 00:15:28.239 23:14:33 -- nvmf/common.sh@446 -- # head -n 1 00:15:28.239 23:14:33 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:28.239 23:14:33 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:28.239 23:14:33 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:28.239 23:14:33 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:28.239 23:14:33 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:28.239 23:14:33 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:28.239 23:14:33 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:28.239 23:14:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:28.239 23:14:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:28.239 23:14:33 -- common/autotest_common.sh@10 -- # set +x 00:15:28.239 23:14:33 -- nvmf/common.sh@469 -- # nvmfpid=574547 00:15:28.239 23:14:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:28.239 23:14:33 -- nvmf/common.sh@470 -- # waitforlisten 574547 00:15:28.239 23:14:33 -- common/autotest_common.sh@819 -- # '[' -z 574547 ']' 00:15:28.239 23:14:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.239 23:14:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.239 23:14:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.239 23:14:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.239 23:14:33 -- common/autotest_common.sh@10 -- # set +x 00:15:28.239 [2024-11-02 23:14:33.841336] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:28.239 [2024-11-02 23:14:33.841386] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.239 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.239 [2024-11-02 23:14:33.910951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:28.239 [2024-11-02 23:14:33.983065] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.239 [2024-11-02 23:14:33.983177] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.239 [2024-11-02 23:14:33.983188] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.239 [2024-11-02 23:14:33.983196] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.239 [2024-11-02 23:14:33.983236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.239 [2024-11-02 23:14:33.983320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.239 [2024-11-02 23:14:33.983322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.177 23:14:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.177 23:14:34 -- common/autotest_common.sh@852 -- # return 0 00:15:29.177 23:14:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:29.177 23:14:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:29.177 23:14:34 -- common/autotest_common.sh@10 -- # set +x 00:15:29.177 23:14:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.177 23:14:34 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:29.177 23:14:34 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:29.177 [2024-11-02 23:14:34.888821] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1931860/0x1935d50) succeed. 00:15:29.177 [2024-11-02 23:14:34.898041] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1932db0/0x19773f0) succeed. 
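
For ns_hotplug_stress the target is configured the same way over rpc.py: the namespace that will be hot-plugged sits on a delay bdev, a resizable 1000-block null bdev provides a second namespace, and the background load is a 30-second spdk_nvme_perf run. A condensed sketch of the setup traced below, using only commands that appear in this log:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # Two namespaces: the delay bdev (removed and re-added by the stress loop) and NULL1.
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # 30 s of queued random reads while namespaces are hot-plugged underneath.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!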
00:15:29.436 23:14:35 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:29.696 23:14:35 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:29.696 [2024-11-02 23:14:35.365839] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:29.696 23:14:35 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:29.955 23:14:35 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:30.214 Malloc0 00:15:30.214 23:14:35 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:30.214 Delay0 00:15:30.214 23:14:35 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.474 23:14:36 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:30.732 NULL1 00:15:30.732 23:14:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:30.732 23:14:36 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:30.732 23:14:36 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=575085 00:15:30.732 23:14:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:30.732 23:14:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.991 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.929 Read completed with error (sct=0, sc=11) 00:15:31.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.930 23:14:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.195 23:14:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:32.195 23:14:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:32.565 true 00:15:32.565 23:14:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:32.565 23:14:37 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.134 23:14:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.393 23:14:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:33.393 23:14:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:33.652 true 00:15:33.652 23:14:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:33.652 23:14:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.594 23:14:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.594 23:14:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:34.594 23:14:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:34.854 true 00:15:34.854 23:14:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:34.854 23:14:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.792 23:14:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.792 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:15:35.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.792 23:14:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:35.792 23:14:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:36.051 true 00:15:36.051 23:14:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:36.051 23:14:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.988 23:14:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.988 23:14:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:36.988 23:14:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:37.248 true 00:15:37.248 23:14:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:37.248 23:14:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.185 23:14:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.186 23:14:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:38.186 23:14:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:38.445 true 00:15:38.445 23:14:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:38.445 23:14:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.383 23:14:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
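The repeating pattern above and below is the single-namespace hotplug stress loop from test/nvmf/target/ns_hotplug_stress.sh: the blocks of "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" are spdk_nvme_perf reporting failed reads while the namespace it is reading from is removed and re-added underneath it, and the sh@44-sh@50 trace lines are the loop body itself. Reconstructed from the traced commands (the rpc.py path, NQN, address, bdev names and arguments are copied verbatim from the log; the while-loop form and the backgrounding of the perf job are assumptions about the script structure, not a literal excerpt), the flow is roughly:

  # Hypothetical reconstruction of the setup and stress loop traced at sh@25-sh@50 above.
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # One-time setup: RDMA transport, subsystem, listener, backing bdevs, namespaces.
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns $NQN Delay0          # first namespace (NSID 1)
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns $NQN NULL1           # second namespace (NSID 2)

  # Load generator: 30 s of 512-byte random reads over RDMA (PERF_PID is 575085 in this run).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  # Stress loop: while the perf job is alive, hot-remove and re-add the Delay0
  # namespace and grow NULL1 by one block each pass.
  null_size=1000
  while kill -0 "$PERF_PID"; do
      $RPC nvmf_subsystem_remove_ns $NQN 1
      $RPC nvmf_subsystem_add_ns $NQN Delay0
      null_size=$((null_size + 1))
      $RPC bdev_null_resize NULL1 $null_size
  done

Each pass bumps null_size by one (1000, 1001, 1002, ... in the sh@49 lines above), so the null_size value doubles as an iteration counter for the loop.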
00:15:39.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.383 23:14:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:39.383 23:14:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:39.643 true 00:15:39.643 23:14:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:39.643 23:14:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.580 23:14:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.580 23:14:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:40.580 23:14:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:40.839 true 00:15:40.839 23:14:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:40.839 23:14:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.777 23:14:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.777 23:14:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:41.777 23:14:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:42.037 true 00:15:42.037 23:14:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:42.037 23:14:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.974 23:14:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.974 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.974 23:14:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:42.974 23:14:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:43.234 true 00:15:43.234 23:14:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:43.234 23:14:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.171 23:14:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.171 23:14:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:44.171 23:14:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:44.430 true 00:15:44.430 23:14:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:44.430 23:14:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.367 23:14:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.368 23:14:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:45.368 23:14:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:45.627 true 00:15:45.627 23:14:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:45.627 23:14:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.565 23:14:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.565 23:14:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:46.565 23:14:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:46.824 true 00:15:46.824 23:14:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:46.824 23:14:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.761 23:14:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.761 23:14:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:47.761 23:14:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:48.020 true 00:15:48.020 23:14:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:48.020 23:14:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.957 23:14:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.957 23:14:54 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:48.957 23:14:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:49.216 true 00:15:49.216 23:14:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:49.216 23:14:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.154 23:14:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.154 23:14:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:50.154 23:14:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:50.414 true 00:15:50.414 23:14:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:50.414 23:14:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.350 23:14:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.609 23:14:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:51.609 23:14:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:51.609 true 00:15:51.609 23:14:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:51.609 23:14:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.547 23:14:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.547 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:15:52.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.806 23:14:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:52.806 23:14:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:52.806 true 00:15:52.806 23:14:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:52.806 23:14:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.744 23:14:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.005 23:14:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:54.005 23:14:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:54.005 true 00:15:54.005 23:14:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:54.005 23:14:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.942 23:15:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.202 23:15:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:55.202 23:15:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:55.202 true 00:15:55.202 23:15:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:55.202 23:15:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.139 23:15:01 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:56.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.399 23:15:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:56.399 23:15:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:56.399 true 00:15:56.399 23:15:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:56.399 23:15:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.337 23:15:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.596 23:15:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:57.596 23:15:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:57.854 true 00:15:57.855 23:15:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:15:57.855 23:15:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.792 23:15:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:58.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.792 23:15:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:58.792 23:15:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:59.051 true 00:15:59.051 23:15:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 
00:15:59.051 23:15:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.987 23:15:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:59.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.987 23:15:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:59.987 23:15:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:00.245 true 00:16:00.245 23:15:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:16:00.245 23:15:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.182 23:15:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.182 23:15:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:01.182 23:15:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:01.441 true 00:16:01.441 23:15:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:16:01.441 23:15:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.700 23:15:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.700 23:15:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:01.700 23:15:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:01.959 true 00:16:01.959 23:15:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:16:01.959 23:15:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.218 23:15:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.218 23:15:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:02.218 23:15:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:02.477 true 00:16:02.477 23:15:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:16:02.477 
23:15:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.735 23:15:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.993 23:15:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:02.993 23:15:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:02.993 true 00:16:02.993 23:15:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:16:02.993 23:15:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.993 Initializing NVMe Controllers 00:16:02.993 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.993 Controller IO queue size 128, less than required. 00:16:02.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:02.993 Controller IO queue size 128, less than required. 00:16:02.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:02.993 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:02.993 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:02.993 Initialization complete. Launching workers. 00:16:02.993 ======================================================== 00:16:02.994 Latency(us) 00:16:02.994 Device Information : IOPS MiB/s Average min max 00:16:02.994 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6081.37 2.97 18338.07 794.51 1132348.77 00:16:02.994 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35862.87 17.51 3569.10 1887.15 279925.77 00:16:02.994 ======================================================== 00:16:02.994 Total : 41944.23 20.48 5710.41 794.51 1132348.77 00:16:02.994 00:16:03.252 23:15:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:03.510 23:15:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:03.510 23:15:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:03.510 true 00:16:03.829 23:15:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 575085 00:16:03.829 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (575085) - No such process 00:16:03.829 23:15:09 -- target/ns_hotplug_stress.sh@53 -- # wait 575085 00:16:03.829 23:15:09 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.829 23:15:09 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:04.115 23:15:09 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:04.115 23:15:09 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:04.115 23:15:09 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:04.115 23:15:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.115 
23:15:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:04.115 null0 00:16:04.115 23:15:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.115 23:15:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.115 23:15:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:04.373 null1 00:16:04.373 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.373 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.373 23:15:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:04.631 null2 00:16:04.631 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.631 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.631 23:15:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:04.631 null3 00:16:04.890 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.890 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.890 23:15:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:04.890 null4 00:16:04.890 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.890 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.890 23:15:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:05.148 null5 00:16:05.148 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:05.148 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:05.148 23:15:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:05.407 null6 00:16:05.407 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:05.407 23:15:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:05.407 23:15:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:05.407 null7 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
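At this point the spdk_nvme_perf job has finished and printed its latency summary (41944.23 IOPS aggregate across both namespaces; at the 512-byte read size used by the perf command that is 41944.23 × 512 B ≈ 20.48 MiB/s, matching the MiB/s column), the kill -0 check has failed with "No such process", and the script has waited for PID 575085 and removed namespaces 1 and 2. The trace from sh@58 onward is the parallel hotplug phase: eight null bdevs (null0 through null7, 100 MiB with a 4096-byte block size) and eight background add_remove workers, each repeatedly mapping its bdev to a fixed namespace ID and removing it again. Reconstructed from the traced script lines (the add_remove body at sh@14-sh@18 and the launcher at sh@58-sh@66; names and arguments are copied from the log, the exact loop syntax is an assumption), the phase looks roughly like this:

  # Hypothetical reconstruction of the traced parallel add/remove phase.
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do                         # sh@16: (( i < 10 ))
          $RPC nvmf_subsystem_add_ns -n "$nsid" $NQN "$bdev"  # sh@17
          $RPC nvmf_subsystem_remove_ns $NQN "$nsid"          # sh@18
      done
  }

  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      $RPC bdev_null_create "null$i" 100 4096                 # sh@60: null0 .. null7
  done

  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &                      # sh@63: nsid 1..8 vs null0..null7
      pids+=($!)                                              # sh@64
  done
  wait "${pids[@]}"                                           # sh@66: wait 581651 581652 ...

Because all eight workers issue nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns against the same subsystem concurrently, the remainder of the trace is dominated by interleaved sh@16-sh@18 lines from the different workers.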
00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:05.407 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@66 -- # wait 581651 581652 581654 581656 581658 581660 581662 581664 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.408 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:05.667 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:05.667 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.667 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:05.667 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:05.667 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:05.667 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:05.667 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:05.667 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.927 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.187 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.446 23:15:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:06.446 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.446 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:06.446 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:06.446 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:06.446 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.446 23:15:12 -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:06.446 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:06.446 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:06.705 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.705 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.706 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.965 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.225 23:15:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.483 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.483 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.484 23:15:13 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.484 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.743 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:08.002 23:15:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.261 23:15:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.520 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:08.779 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.037 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.038 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:09.038 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.038 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.038 23:15:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:09.295 23:15:14 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.295 23:15:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:09.295 23:15:15 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:09.295 23:15:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:09.295 23:15:15 -- nvmf/common.sh@116 -- # sync 00:16:09.553 23:15:15 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:09.553 23:15:15 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:09.553 23:15:15 -- nvmf/common.sh@119 -- # set +e 00:16:09.553 23:15:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:09.553 23:15:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:09.553 rmmod nvme_rdma 00:16:09.553 rmmod nvme_fabrics 00:16:09.553 23:15:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:09.553 23:15:15 -- nvmf/common.sh@123 -- # set -e 00:16:09.553 23:15:15 -- nvmf/common.sh@124 -- # return 0 00:16:09.553 23:15:15 -- nvmf/common.sh@477 -- # '[' -n 574547 ']' 00:16:09.553 23:15:15 -- nvmf/common.sh@478 -- # killprocess 574547 00:16:09.553 23:15:15 -- common/autotest_common.sh@926 -- # '[' -z 574547 ']' 00:16:09.553 23:15:15 -- common/autotest_common.sh@930 -- # kill -0 574547 00:16:09.553 23:15:15 -- common/autotest_common.sh@931 -- # uname 00:16:09.553 23:15:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.553 23:15:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 574547 00:16:09.553 23:15:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:09.553 23:15:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:09.553 23:15:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 574547' 00:16:09.553 killing process with pid 574547 00:16:09.553 23:15:15 -- common/autotest_common.sh@945 -- # kill 574547 00:16:09.553 23:15:15 -- common/autotest_common.sh@950 -- # wait 574547 00:16:09.812 23:15:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:09.812 23:15:15 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:09.812 00:16:09.812 real 0m48.502s 00:16:09.812 user 3m18.825s 00:16:09.812 
sys 0m13.618s 00:16:09.812 23:15:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.812 23:15:15 -- common/autotest_common.sh@10 -- # set +x 00:16:09.812 ************************************ 00:16:09.812 END TEST nvmf_ns_hotplug_stress 00:16:09.812 ************************************ 00:16:09.812 23:15:15 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:09.812 23:15:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:09.812 23:15:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:09.812 23:15:15 -- common/autotest_common.sh@10 -- # set +x 00:16:09.812 ************************************ 00:16:09.812 START TEST nvmf_connect_stress 00:16:09.812 ************************************ 00:16:09.812 23:15:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:09.812 * Looking for test storage... 00:16:09.812 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:09.812 23:15:15 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.812 23:15:15 -- nvmf/common.sh@7 -- # uname -s 00:16:09.812 23:15:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.812 23:15:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.812 23:15:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.812 23:15:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.812 23:15:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.812 23:15:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.812 23:15:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.812 23:15:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.812 23:15:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.812 23:15:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.813 23:15:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:09.813 23:15:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:09.813 23:15:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.813 23:15:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.813 23:15:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.813 23:15:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:09.813 23:15:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.813 23:15:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.813 23:15:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.813 23:15:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.813 23:15:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.813 23:15:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.813 23:15:15 -- paths/export.sh@5 -- # export PATH 00:16:09.813 23:15:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.813 23:15:15 -- nvmf/common.sh@46 -- # : 0 00:16:09.813 23:15:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:09.813 23:15:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:09.813 23:15:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:09.813 23:15:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.813 23:15:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.813 23:15:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:09.813 23:15:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:09.813 23:15:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:09.813 23:15:15 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:09.813 23:15:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:09.813 23:15:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.813 23:15:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:09.813 23:15:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:09.813 23:15:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:09.813 23:15:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.813 23:15:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.813 23:15:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.073 23:15:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:10.073 23:15:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:10.073 23:15:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:10.073 23:15:15 -- common/autotest_common.sh@10 -- # set +x 00:16:16.641 23:15:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:16.641 23:15:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:16.641 23:15:21 -- nvmf/common.sh@290 -- # local -a pci_devs 
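Before any hardware probing, nvmf/common.sh (sourced above) fixes the host identity that later nvme connect calls will present. A minimal sketch of that step, assuming only nvme-cli; the suffix-strip used to derive the host ID is an assumption, the log only records the resulting values:

  # Sketch: host identity as set around nvmf/common.sh@17-19 above (values match the log).
  NVME_HOSTNQN=$(nvme gen-hostnqn)                                  # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}      # assumed derivation: strip the NQN prefix
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")     # handed to later nvme connect invocations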
00:16:16.641 23:15:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:16.641 23:15:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:16.641 23:15:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:16.641 23:15:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:16.641 23:15:21 -- nvmf/common.sh@294 -- # net_devs=() 00:16:16.641 23:15:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:16.641 23:15:21 -- nvmf/common.sh@295 -- # e810=() 00:16:16.641 23:15:21 -- nvmf/common.sh@295 -- # local -ga e810 00:16:16.641 23:15:21 -- nvmf/common.sh@296 -- # x722=() 00:16:16.641 23:15:21 -- nvmf/common.sh@296 -- # local -ga x722 00:16:16.641 23:15:21 -- nvmf/common.sh@297 -- # mlx=() 00:16:16.641 23:15:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:16.641 23:15:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.641 23:15:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:16.642 23:15:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:16.642 23:15:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:16.642 23:15:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:16.642 23:15:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:16.642 23:15:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:16.642 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:16.642 23:15:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:16.642 23:15:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:16.642 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:16.642 23:15:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
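The scan above found both ports of a Mellanox adapter at 0000:d9:00.0/00.1 (vendor 0x15b3, device 0x1015) and, since the transport is rdma, set NVME_CONNECT='nvme connect -i 15'. As a hedged aside, not part of the captured run, the same vendor/device pair can be read back from sysfs:

  # Illustrative only: confirm the IDs reported as "Found 0000:d9:00.x (0x15b3 - 0x1015)" above.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      echo "$pci vendor=$(cat /sys/bus/pci/devices/$pci/vendor) device=$(cat /sys/bus/pci/devices/$pci/device)"
  done
  # expected: vendor=0x15b3 device=0x1015 (a ConnectX-4 Lx class part)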
00:16:16.642 23:15:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:16.642 23:15:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:16.642 23:15:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.642 23:15:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:16.642 23:15:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.642 23:15:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:16.642 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.642 23:15:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.642 23:15:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:16.642 23:15:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.642 23:15:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:16.642 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.642 23:15:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:16.642 23:15:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:16.642 23:15:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:16.642 23:15:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:16.642 23:15:21 -- nvmf/common.sh@57 -- # uname 00:16:16.642 23:15:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:16.642 23:15:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:16.642 23:15:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:16.642 23:15:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:16.642 23:15:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:16.642 23:15:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:16.642 23:15:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:16.642 23:15:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:16.642 23:15:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:16.642 23:15:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:16.642 23:15:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:16.642 23:15:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:16.642 23:15:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:16.642 23:15:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:16.642 23:15:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:16.642 23:15:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:16.642 23:15:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@104 -- # continue 2 00:16:16.642 23:15:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
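rdma_device_init above loads the kernel RDMA stack module by module before any interface is touched. Condensed into a loop for readability; the sysfs listing at the end is an added check, not something the captured run performs:

  # Same modules, in the same order, as nvmf/common.sh@61-67 above.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
  ls /sys/class/infiniband   # added check: expect mlx5_0 and mlx5_1 once the mlx5 driver is bound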
00:16:16.642 23:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@104 -- # continue 2 00:16:16.642 23:15:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:16.642 23:15:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:16.642 23:15:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:16.642 23:15:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:16.642 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:16.642 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:16.642 altname enp217s0f0np0 00:16:16.642 altname ens818f0np0 00:16:16.642 inet 192.168.100.8/24 scope global mlx_0_0 00:16:16.642 valid_lft forever preferred_lft forever 00:16:16.642 23:15:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:16.642 23:15:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:16.642 23:15:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:16.642 23:15:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:16.642 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:16.642 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:16.642 altname enp217s0f1np1 00:16:16.642 altname ens818f1np1 00:16:16.642 inet 192.168.100.9/24 scope global mlx_0_1 00:16:16.642 valid_lft forever preferred_lft forever 00:16:16.642 23:15:21 -- nvmf/common.sh@410 -- # return 0 00:16:16.642 23:15:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:16.642 23:15:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:16.642 23:15:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:16.642 23:15:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:16.642 23:15:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:16.642 23:15:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:16.642 23:15:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:16.642 23:15:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:16.642 23:15:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:16.642 23:15:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@104 -- # continue 2 00:16:16.642 23:15:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
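allocate_nic_ips above walks the RDMA netdevs and records one IPv4 address per port; the ip/awk/cut pipeline it uses is easier to read in one piece. A sketch using the interface names and addresses from this run:

  # get_ip_address as exercised at nvmf/common.sh@111-112 above.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
  get_ip_address mlx_0_1   # -> 192.168.100.9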
00:16:16.642 23:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.642 23:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:16.642 23:15:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@104 -- # continue 2 00:16:16.642 23:15:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:16.642 23:15:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:16.642 23:15:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:16.642 23:15:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:16.642 23:15:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:16.642 23:15:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:16.642 192.168.100.9' 00:16:16.642 23:15:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:16.642 192.168.100.9' 00:16:16.642 23:15:21 -- nvmf/common.sh@445 -- # head -n 1 00:16:16.642 23:15:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:16.642 23:15:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:16.642 192.168.100.9' 00:16:16.642 23:15:21 -- nvmf/common.sh@446 -- # tail -n +2 00:16:16.642 23:15:21 -- nvmf/common.sh@446 -- # head -n 1 00:16:16.642 23:15:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:16.642 23:15:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:16.642 23:15:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:16.642 23:15:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:16.642 23:15:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:16.642 23:15:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:16.642 23:15:21 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:16.642 23:15:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.642 23:15:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:16.642 23:15:21 -- common/autotest_common.sh@10 -- # set +x 00:16:16.642 23:15:21 -- nvmf/common.sh@469 -- # nvmfpid=585789 00:16:16.642 23:15:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:16.643 23:15:21 -- nvmf/common.sh@470 -- # waitforlisten 585789 00:16:16.643 23:15:21 -- common/autotest_common.sh@819 -- # '[' -z 585789 ']' 00:16:16.643 23:15:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.643 23:15:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.643 23:15:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
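From the per-port addresses the script derives the two target IPs, extends NVMF_TRANSPORT_OPTS with --num-shared-buffers 1024, loads nvme-rdma, and starts nvmf_tgt (pid 585789), waiting on /var/tmp/spdk.sock. The address split is just head/tail over the newline-separated list:

  # Splitting RDMA_IP_LIST exactly as nvmf/common.sh@444-446 does above.
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9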
00:16:16.643 23:15:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.643 23:15:21 -- common/autotest_common.sh@10 -- # set +x 00:16:16.643 [2024-11-02 23:15:21.974051] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:16.643 [2024-11-02 23:15:21.974102] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.643 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.643 [2024-11-02 23:15:22.044092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:16.643 [2024-11-02 23:15:22.110639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.643 [2024-11-02 23:15:22.110774] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.643 [2024-11-02 23:15:22.110784] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.643 [2024-11-02 23:15:22.110792] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.643 [2024-11-02 23:15:22.110915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.643 [2024-11-02 23:15:22.111003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.643 [2024-11-02 23:15:22.111005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.211 23:15:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.211 23:15:22 -- common/autotest_common.sh@852 -- # return 0 00:16:17.211 23:15:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.211 23:15:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:17.211 23:15:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.211 23:15:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.211 23:15:22 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:17.211 23:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.211 23:15:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.211 [2024-11-02 23:15:22.868571] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a59860/0x1a5dd50) succeed. 00:16:17.211 [2024-11-02 23:15:22.877583] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a5adb0/0x1a9f3f0) succeed. 
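Once the target app is up, connect_stress.sh configures it purely through RPCs: the transport call just above, then (immediately below in the log) a subsystem capped at 10 namespaces, an RDMA listener on 192.168.100.8:4420, and a NULL1 null bdev, before launching the stress client for 10 seconds. Gathered here as a sketch, written as direct scripts/rpc.py invocations equivalent to the rpc_cmd helper calls, with the long /var/jenkins/... paths abbreviated:

  # Target-side bring-up, as captured in the surrounding log entries.
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py bdev_null_create NULL1 1000 512
  # Stress client (test/nvme/connect_stress), pinned to core 0, 10-second run:
  connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10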
00:16:17.470 23:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.470 23:15:22 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:17.470 23:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.470 23:15:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.470 23:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.470 23:15:22 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:17.470 23:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.470 23:15:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.470 [2024-11-02 23:15:22.988027] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:17.470 23:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.470 23:15:22 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:17.470 23:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.470 23:15:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.470 NULL1 00:16:17.470 23:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.470 23:15:23 -- target/connect_stress.sh@21 -- # PERF_PID=586076 00:16:17.470 23:15:23 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:17.470 23:15:23 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:17.470 23:15:23 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 
00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.470 23:15:23 -- target/connect_stress.sh@28 -- # cat 00:16:17.470 23:15:23 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:17.470 23:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.470 23:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.470 23:15:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.729 23:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.729 23:15:23 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:17.729 23:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.729 23:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.729 23:15:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.296 23:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.296 23:15:23 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:18.296 23:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.296 23:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.296 23:15:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.555 23:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.555 23:15:24 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:18.555 23:15:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.555 23:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.555 23:15:24 -- common/autotest_common.sh@10 -- # set +x 00:16:18.814 23:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.814 23:15:24 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:18.814 23:15:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.814 23:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.814 23:15:24 -- common/autotest_common.sh@10 -- # set +x 00:16:19.073 23:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.073 23:15:24 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:19.073 23:15:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.073 23:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.073 23:15:24 -- common/autotest_common.sh@10 -- # set +x 00:16:19.331 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.332 23:15:25 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:19.332 23:15:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.332 23:15:25 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.332 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.899 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.899 23:15:25 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:19.899 23:15:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.899 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.899 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:16:20.157 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.158 23:15:25 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:20.158 23:15:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.158 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.158 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:16:20.416 23:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.416 23:15:26 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:20.416 23:15:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.416 23:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.416 23:15:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.675 23:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.675 23:15:26 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:20.675 23:15:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.675 23:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.675 23:15:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.934 23:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.934 23:15:26 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:20.934 23:15:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.934 23:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.934 23:15:26 -- common/autotest_common.sh@10 -- # set +x 00:16:21.502 23:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.502 23:15:27 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:21.502 23:15:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.502 23:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.502 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:16:21.761 23:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.761 23:15:27 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:21.761 23:15:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.761 23:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.761 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:16:22.019 23:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.019 23:15:27 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:22.019 23:15:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.019 23:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.019 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:16:22.278 23:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.278 23:15:27 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:22.278 23:15:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.278 23:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.278 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 23:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.845 23:15:28 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:22.845 23:15:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.845 23:15:28 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.845 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:23.104 23:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.104 23:15:28 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:23.104 23:15:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.104 23:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.104 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:23.363 23:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.363 23:15:28 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:23.363 23:15:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.363 23:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.363 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:23.623 23:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.623 23:15:29 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:23.623 23:15:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.623 23:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.623 23:15:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.880 23:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.880 23:15:29 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:23.880 23:15:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.880 23:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.880 23:15:29 -- common/autotest_common.sh@10 -- # set +x 00:16:24.447 23:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.447 23:15:29 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:24.447 23:15:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.447 23:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.447 23:15:29 -- common/autotest_common.sh@10 -- # set +x 00:16:24.706 23:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.706 23:15:30 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:24.706 23:15:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.706 23:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.706 23:15:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.965 23:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.965 23:15:30 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:24.965 23:15:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.965 23:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.965 23:15:30 -- common/autotest_common.sh@10 -- # set +x 00:16:25.224 23:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.224 23:15:30 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:25.224 23:15:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.224 23:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.224 23:15:30 -- common/autotest_common.sh@10 -- # set +x 00:16:25.793 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.793 23:15:31 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:25.793 23:15:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.793 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.793 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:16:26.052 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.052 23:15:31 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:26.052 23:15:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.052 23:15:31 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.052 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:16:26.311 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.311 23:15:31 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:26.311 23:15:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.311 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.311 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:16:26.570 23:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.570 23:15:32 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:26.570 23:15:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.570 23:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.570 23:15:32 -- common/autotest_common.sh@10 -- # set +x 00:16:26.829 23:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.829 23:15:32 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:26.829 23:15:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.829 23:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.829 23:15:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.398 23:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.398 23:15:32 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:27.398 23:15:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:27.398 23:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.398 23:15:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.398 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:27.657 23:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.657 23:15:33 -- target/connect_stress.sh@34 -- # kill -0 586076 00:16:27.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (586076) - No such process 00:16:27.657 23:15:33 -- target/connect_stress.sh@38 -- # wait 586076 00:16:27.657 23:15:33 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:27.657 23:15:33 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:27.657 23:15:33 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:27.657 23:15:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:27.657 23:15:33 -- nvmf/common.sh@116 -- # sync 00:16:27.657 23:15:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:27.657 23:15:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:27.657 23:15:33 -- nvmf/common.sh@119 -- # set +e 00:16:27.657 23:15:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:27.657 23:15:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:27.657 rmmod nvme_rdma 00:16:27.657 rmmod nvme_fabrics 00:16:27.657 23:15:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:27.657 23:15:33 -- nvmf/common.sh@123 -- # set -e 00:16:27.657 23:15:33 -- nvmf/common.sh@124 -- # return 0 00:16:27.657 23:15:33 -- nvmf/common.sh@477 -- # '[' -n 585789 ']' 00:16:27.657 23:15:33 -- nvmf/common.sh@478 -- # killprocess 585789 00:16:27.657 23:15:33 -- common/autotest_common.sh@926 -- # '[' -z 585789 ']' 00:16:27.657 23:15:33 -- common/autotest_common.sh@930 -- # kill -0 585789 00:16:27.657 23:15:33 -- common/autotest_common.sh@931 -- # uname 00:16:27.657 23:15:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:27.657 23:15:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 585789 00:16:27.657 23:15:33 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:27.657 23:15:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:27.657 23:15:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 585789' 00:16:27.657 killing process with pid 585789 00:16:27.657 23:15:33 -- common/autotest_common.sh@945 -- # kill 585789 00:16:27.657 23:15:33 -- common/autotest_common.sh@950 -- # wait 585789 00:16:27.916 23:15:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:27.916 23:15:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:27.916 00:16:27.916 real 0m18.147s 00:16:27.916 user 0m41.580s 00:16:27.916 sys 0m7.150s 00:16:27.916 23:15:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.916 23:15:33 -- common/autotest_common.sh@10 -- # set +x 00:16:27.916 ************************************ 00:16:27.916 END TEST nvmf_connect_stress 00:16:27.916 ************************************ 00:16:27.916 23:15:33 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:27.916 23:15:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:27.916 23:15:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.916 23:15:33 -- common/autotest_common.sh@10 -- # set +x 00:16:27.916 ************************************ 00:16:27.916 START TEST nvmf_fused_ordering 00:16:27.916 ************************************ 00:16:27.916 23:15:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:28.176 * Looking for test storage... 00:16:28.176 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:28.176 23:15:33 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.176 23:15:33 -- nvmf/common.sh@7 -- # uname -s 00:16:28.176 23:15:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.176 23:15:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.176 23:15:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.176 23:15:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.176 23:15:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.176 23:15:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.176 23:15:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.176 23:15:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.176 23:15:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.176 23:15:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.176 23:15:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:28.176 23:15:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:28.176 23:15:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.176 23:15:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.176 23:15:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.176 23:15:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:28.176 23:15:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.176 23:15:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.176 23:15:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
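The nvmf_connect_stress run that finishes above follows a simple liveness-probe pattern: the harness repeatedly sends signal 0 to the stress process between batches of RPC traffic until the process disappears, then reaps it and removes the scratch RPC file. A minimal sketch of that pattern, with the PID and the rpc.txt redirection assumed purely for illustration:

    stress_pid=586076                             # example PID taken from the trace above
    while kill -0 "$stress_pid" 2>/dev/null; do   # signal 0 only tests that the process still exists
        rpc_cmd < rpc.txt                         # replay the prepared RPC batch against the target
    done
    wait "$stress_pid"                            # reap the stress process once it has exited
    rm -f rpc.txt                                 # drop the scratch file, as connect_stress.sh line 39 does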
00:16:28.176 23:15:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.176 23:15:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.176 23:15:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.176 23:15:33 -- paths/export.sh@5 -- # export PATH 00:16:28.176 23:15:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.176 23:15:33 -- nvmf/common.sh@46 -- # : 0 00:16:28.176 23:15:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:28.176 23:15:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:28.176 23:15:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:28.176 23:15:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.176 23:15:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.176 23:15:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:28.176 23:15:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:28.176 23:15:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:28.176 23:15:33 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:28.176 23:15:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:28.176 23:15:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.176 23:15:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:28.176 23:15:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:28.176 23:15:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:28.176 23:15:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.176 23:15:33 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:28.176 23:15:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.176 23:15:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:28.176 23:15:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:28.176 23:15:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:28.176 23:15:33 -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 23:15:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:34.752 23:15:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:34.752 23:15:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:34.752 23:15:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:34.752 23:15:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:34.752 23:15:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:34.752 23:15:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:34.752 23:15:39 -- nvmf/common.sh@294 -- # net_devs=() 00:16:34.752 23:15:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:34.752 23:15:39 -- nvmf/common.sh@295 -- # e810=() 00:16:34.752 23:15:39 -- nvmf/common.sh@295 -- # local -ga e810 00:16:34.752 23:15:39 -- nvmf/common.sh@296 -- # x722=() 00:16:34.752 23:15:39 -- nvmf/common.sh@296 -- # local -ga x722 00:16:34.752 23:15:39 -- nvmf/common.sh@297 -- # mlx=() 00:16:34.752 23:15:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:34.752 23:15:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.752 23:15:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:34.752 23:15:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:34.752 23:15:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:34.752 23:15:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:34.752 23:15:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:34.752 23:15:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:34.752 23:15:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:34.752 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:34.752 23:15:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 
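At this point nvmf/common.sh has walked the PCI bus looking for supported NICs and matched both ports of a Mellanox adapter (vendor 0x15b3, device 0x1015). A rough, illustrative equivalent of that discovery step, using lspci and the same sysfs path the trace prints (the real whitelist in common.sh covers more device IDs than the single one matched on this rig):

    for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do      # Mellanox 0x1015 ports only
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)             # netdev(s) backing this PCI function
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"  # e.g. mlx_0_0 / mlx_0_1
    done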
00:16:34.752 23:15:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:34.752 23:15:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:34.752 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:34.752 23:15:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:34.752 23:15:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:34.752 23:15:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:34.752 23:15:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.752 23:15:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:34.752 23:15:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.752 23:15:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:34.752 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:34.752 23:15:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.752 23:15:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:34.752 23:15:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.752 23:15:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:34.752 23:15:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.752 23:15:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:34.752 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:34.752 23:15:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.752 23:15:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:34.752 23:15:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:34.752 23:15:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:34.752 23:15:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:34.752 23:15:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:34.752 23:15:39 -- nvmf/common.sh@57 -- # uname 00:16:34.752 23:15:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:34.752 23:15:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:34.753 23:15:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:34.753 23:15:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:34.753 23:15:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:34.753 23:15:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:34.753 23:15:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:34.753 23:15:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:34.753 23:15:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:34.753 23:15:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:34.753 23:15:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:34.753 23:15:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:34.753 23:15:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:34.753 23:15:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:34.753 23:15:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
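Before any addresses are assigned, rdma_device_init loads the kernel-side RDMA stack module by module, in the order printed above; a short sketch of that step:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"        # bring up the IB core, user verbs, iWARP and RDMA-CM layers
    done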
00:16:34.753 23:15:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:34.753 23:15:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:34.753 23:15:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 23:15:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:34.753 23:15:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:34.753 23:15:39 -- nvmf/common.sh@104 -- # continue 2 00:16:34.753 23:15:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:34.753 23:15:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 23:15:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:34.753 23:15:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 23:15:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:34.753 23:15:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:34.753 23:15:39 -- nvmf/common.sh@104 -- # continue 2 00:16:34.753 23:15:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:34.753 23:15:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:34.753 23:15:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:34.753 23:15:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:34.753 23:15:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:34.753 23:15:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:34.753 23:15:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:34.753 23:15:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:34.753 23:15:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:34.753 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:34.753 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:34.753 altname enp217s0f0np0 00:16:34.753 altname ens818f0np0 00:16:34.753 inet 192.168.100.8/24 scope global mlx_0_0 00:16:34.753 valid_lft forever preferred_lft forever 00:16:34.753 23:15:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:34.753 23:15:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:34.753 23:15:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:34.753 23:15:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:34.753 23:15:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:34.753 23:15:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:34.753 23:15:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:34.753 23:15:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:34.753 23:15:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:34.753 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:34.753 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:34.753 altname enp217s0f1np1 00:16:34.753 altname ens818f1np1 00:16:34.753 inet 192.168.100.9/24 scope global mlx_0_1 00:16:34.753 valid_lft forever preferred_lft forever 00:16:34.753 23:15:39 -- nvmf/common.sh@410 -- # return 0 00:16:34.753 23:15:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:34.753 23:15:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:34.753 23:15:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:34.753 23:15:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:34.753 23:15:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:34.753 23:15:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:34.753 23:15:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:34.753 23:15:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:34.753 23:15:40 -- nvmf/common.sh@53 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:34.753 23:15:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:34.753 23:15:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:34.753 23:15:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 23:15:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:34.753 23:15:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:34.753 23:15:40 -- nvmf/common.sh@104 -- # continue 2 00:16:34.753 23:15:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:34.753 23:15:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 23:15:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:34.753 23:15:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.753 23:15:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:34.753 23:15:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:34.753 23:15:40 -- nvmf/common.sh@104 -- # continue 2 00:16:34.753 23:15:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:34.753 23:15:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:34.753 23:15:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:34.753 23:15:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:34.753 23:15:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:34.753 23:15:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:34.753 23:15:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:34.753 23:15:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:34.753 23:15:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:34.753 23:15:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:34.753 23:15:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:34.753 23:15:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:34.753 23:15:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:34.753 192.168.100.9' 00:16:34.753 23:15:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:34.753 192.168.100.9' 00:16:34.753 23:15:40 -- nvmf/common.sh@445 -- # head -n 1 00:16:34.753 23:15:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:34.753 23:15:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:34.753 192.168.100.9' 00:16:34.753 23:15:40 -- nvmf/common.sh@446 -- # tail -n +2 00:16:34.753 23:15:40 -- nvmf/common.sh@446 -- # head -n 1 00:16:34.753 23:15:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:34.753 23:15:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:34.753 23:15:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:34.753 23:15:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:34.753 23:15:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:34.753 23:15:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:34.753 23:15:40 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:34.753 23:15:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:34.753 23:15:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:34.753 23:15:40 -- common/autotest_common.sh@10 -- # set +x 00:16:34.753 23:15:40 -- nvmf/common.sh@469 -- # nvmfpid=591149 00:16:34.753 23:15:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:34.753 23:15:40 -- nvmf/common.sh@470 -- # waitforlisten 591149 00:16:34.753 23:15:40 -- 
common/autotest_common.sh@819 -- # '[' -z 591149 ']' 00:16:34.753 23:15:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.753 23:15:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:34.753 23:15:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.753 23:15:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:34.753 23:15:40 -- common/autotest_common.sh@10 -- # set +x 00:16:34.753 [2024-11-02 23:15:40.163093] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:34.753 [2024-11-02 23:15:40.163146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.753 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.753 [2024-11-02 23:15:40.234316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.753 [2024-11-02 23:15:40.307568] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:34.753 [2024-11-02 23:15:40.307682] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.753 [2024-11-02 23:15:40.307692] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.753 [2024-11-02 23:15:40.307701] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.753 [2024-11-02 23:15:40.307723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.322 23:15:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:35.322 23:15:40 -- common/autotest_common.sh@852 -- # return 0 00:16:35.322 23:15:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:35.322 23:15:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:35.322 23:15:40 -- common/autotest_common.sh@10 -- # set +x 00:16:35.322 23:15:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.322 23:15:41 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:35.322 23:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.322 23:15:41 -- common/autotest_common.sh@10 -- # set +x 00:16:35.322 [2024-11-02 23:15:41.056844] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17c6230/0x17ca720) succeed. 00:16:35.322 [2024-11-02 23:15:41.065916] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17c7730/0x180bdc0) succeed. 
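With both mlx_0 ports up on 192.168.100.8/9 and nvme-rdma loaded, the harness starts the target application and creates the RDMA transport over the management socket. The rpc_cmd helper is assumed here to forward to scripts/rpc.py against the default /var/tmp/spdk.sock; a sketch of the equivalent manual bring-up from an SPDK checkout:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &      # shm id 0, all trace groups, core mask 0x2
    tgt_pid=$!
    # (the harness waits for /var/tmp/spdk.sock to accept RPCs before continuing)
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192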
00:16:35.581 23:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.581 23:15:41 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:35.581 23:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.581 23:15:41 -- common/autotest_common.sh@10 -- # set +x 00:16:35.581 23:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.581 23:15:41 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:35.581 23:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.581 23:15:41 -- common/autotest_common.sh@10 -- # set +x 00:16:35.581 [2024-11-02 23:15:41.126787] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:35.581 23:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.581 23:15:41 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:35.581 23:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.581 23:15:41 -- common/autotest_common.sh@10 -- # set +x 00:16:35.581 NULL1 00:16:35.581 23:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.581 23:15:41 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:35.581 23:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.581 23:15:41 -- common/autotest_common.sh@10 -- # set +x 00:16:35.581 23:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.581 23:15:41 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:35.581 23:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.581 23:15:41 -- common/autotest_common.sh@10 -- # set +x 00:16:35.581 23:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.581 23:15:41 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:35.581 [2024-11-02 23:15:41.183277] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
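The subsystem is then assembled over the same RPC channel and the fused_ordering initiator is pointed at the new listener; restated as direct rpc.py calls (again assuming rpc_cmd simply wraps scripts/rpc.py):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks -> the 1 GB namespace reported below
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'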
00:16:35.581 [2024-11-02 23:15:41.183313] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591333 ] 00:16:35.581 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.841 Attached to nqn.2016-06.io.spdk:cnode1 00:16:35.841 Namespace ID: 1 size: 1GB 00:16:35.841 fused_ordering(0) 00:16:35.841 fused_ordering(1) 00:16:35.841 fused_ordering(2) 00:16:35.841 fused_ordering(3) 00:16:35.841 fused_ordering(4) 00:16:35.841 fused_ordering(5) 00:16:35.841 fused_ordering(6) 00:16:35.841 fused_ordering(7) 00:16:35.841 fused_ordering(8) 00:16:35.841 fused_ordering(9) 00:16:35.841 fused_ordering(10) 00:16:35.841 fused_ordering(11) 00:16:35.841 fused_ordering(12) 00:16:35.841 fused_ordering(13) 00:16:35.841 fused_ordering(14) 00:16:35.841 fused_ordering(15) 00:16:35.841 fused_ordering(16) 00:16:35.841 fused_ordering(17) 00:16:35.841 fused_ordering(18) 00:16:35.841 fused_ordering(19) 00:16:35.841 fused_ordering(20) 00:16:35.841 fused_ordering(21) 00:16:35.841 fused_ordering(22) 00:16:35.841 fused_ordering(23) 00:16:35.841 fused_ordering(24) 00:16:35.841 fused_ordering(25) 00:16:35.841 fused_ordering(26) 00:16:35.841 fused_ordering(27) 00:16:35.841 fused_ordering(28) 00:16:35.841 fused_ordering(29) 00:16:35.841 fused_ordering(30) 00:16:35.841 fused_ordering(31) 00:16:35.841 fused_ordering(32) 00:16:35.841 fused_ordering(33) 00:16:35.841 fused_ordering(34) 00:16:35.841 fused_ordering(35) 00:16:35.841 fused_ordering(36) 00:16:35.841 fused_ordering(37) 00:16:35.841 fused_ordering(38) 00:16:35.841 fused_ordering(39) 00:16:35.841 fused_ordering(40) 00:16:35.841 fused_ordering(41) 00:16:35.841 fused_ordering(42) 00:16:35.841 fused_ordering(43) 00:16:35.841 fused_ordering(44) 00:16:35.841 fused_ordering(45) 00:16:35.841 fused_ordering(46) 00:16:35.841 fused_ordering(47) 00:16:35.841 fused_ordering(48) 00:16:35.841 fused_ordering(49) 00:16:35.841 fused_ordering(50) 00:16:35.841 fused_ordering(51) 00:16:35.841 fused_ordering(52) 00:16:35.841 fused_ordering(53) 00:16:35.841 fused_ordering(54) 00:16:35.841 fused_ordering(55) 00:16:35.841 fused_ordering(56) 00:16:35.841 fused_ordering(57) 00:16:35.841 fused_ordering(58) 00:16:35.841 fused_ordering(59) 00:16:35.841 fused_ordering(60) 00:16:35.841 fused_ordering(61) 00:16:35.841 fused_ordering(62) 00:16:35.841 fused_ordering(63) 00:16:35.841 fused_ordering(64) 00:16:35.841 fused_ordering(65) 00:16:35.841 fused_ordering(66) 00:16:35.841 fused_ordering(67) 00:16:35.841 fused_ordering(68) 00:16:35.841 fused_ordering(69) 00:16:35.841 fused_ordering(70) 00:16:35.841 fused_ordering(71) 00:16:35.841 fused_ordering(72) 00:16:35.841 fused_ordering(73) 00:16:35.841 fused_ordering(74) 00:16:35.841 fused_ordering(75) 00:16:35.841 fused_ordering(76) 00:16:35.841 fused_ordering(77) 00:16:35.841 fused_ordering(78) 00:16:35.841 fused_ordering(79) 00:16:35.841 fused_ordering(80) 00:16:35.841 fused_ordering(81) 00:16:35.841 fused_ordering(82) 00:16:35.841 fused_ordering(83) 00:16:35.841 fused_ordering(84) 00:16:35.841 fused_ordering(85) 00:16:35.841 fused_ordering(86) 00:16:35.841 fused_ordering(87) 00:16:35.841 fused_ordering(88) 00:16:35.841 fused_ordering(89) 00:16:35.841 fused_ordering(90) 00:16:35.841 fused_ordering(91) 00:16:35.841 fused_ordering(92) 00:16:35.841 fused_ordering(93) 00:16:35.841 fused_ordering(94) 00:16:35.841 fused_ordering(95) 00:16:35.841 fused_ordering(96) 00:16:35.841 
fused_ordering(97) 00:16:35.841 fused_ordering(98) 00:16:35.841 fused_ordering(99) 00:16:35.841 fused_ordering(100) 00:16:35.841 fused_ordering(101) 00:16:35.841 fused_ordering(102) 00:16:35.841 fused_ordering(103) 00:16:35.841 fused_ordering(104) 00:16:35.841 fused_ordering(105) 00:16:35.841 fused_ordering(106) 00:16:35.841 fused_ordering(107) 00:16:35.841 fused_ordering(108) 00:16:35.841 fused_ordering(109) 00:16:35.841 fused_ordering(110) 00:16:35.841 fused_ordering(111) 00:16:35.841 fused_ordering(112) 00:16:35.841 fused_ordering(113) 00:16:35.841 fused_ordering(114) 00:16:35.841 fused_ordering(115) 00:16:35.841 fused_ordering(116) 00:16:35.841 fused_ordering(117) 00:16:35.841 fused_ordering(118) 00:16:35.841 fused_ordering(119) 00:16:35.841 fused_ordering(120) 00:16:35.841 fused_ordering(121) 00:16:35.841 fused_ordering(122) 00:16:35.841 fused_ordering(123) 00:16:35.841 fused_ordering(124) 00:16:35.841 fused_ordering(125) 00:16:35.841 fused_ordering(126) 00:16:35.841 fused_ordering(127) 00:16:35.841 fused_ordering(128) 00:16:35.841 fused_ordering(129) 00:16:35.841 fused_ordering(130) 00:16:35.841 fused_ordering(131) 00:16:35.841 fused_ordering(132) 00:16:35.841 fused_ordering(133) 00:16:35.841 fused_ordering(134) 00:16:35.841 fused_ordering(135) 00:16:35.841 fused_ordering(136) 00:16:35.841 fused_ordering(137) 00:16:35.841 fused_ordering(138) 00:16:35.841 fused_ordering(139) 00:16:35.841 fused_ordering(140) 00:16:35.841 fused_ordering(141) 00:16:35.841 fused_ordering(142) 00:16:35.841 fused_ordering(143) 00:16:35.841 fused_ordering(144) 00:16:35.841 fused_ordering(145) 00:16:35.841 fused_ordering(146) 00:16:35.841 fused_ordering(147) 00:16:35.841 fused_ordering(148) 00:16:35.841 fused_ordering(149) 00:16:35.841 fused_ordering(150) 00:16:35.841 fused_ordering(151) 00:16:35.841 fused_ordering(152) 00:16:35.841 fused_ordering(153) 00:16:35.841 fused_ordering(154) 00:16:35.841 fused_ordering(155) 00:16:35.841 fused_ordering(156) 00:16:35.841 fused_ordering(157) 00:16:35.841 fused_ordering(158) 00:16:35.841 fused_ordering(159) 00:16:35.841 fused_ordering(160) 00:16:35.841 fused_ordering(161) 00:16:35.841 fused_ordering(162) 00:16:35.841 fused_ordering(163) 00:16:35.842 fused_ordering(164) 00:16:35.842 fused_ordering(165) 00:16:35.842 fused_ordering(166) 00:16:35.842 fused_ordering(167) 00:16:35.842 fused_ordering(168) 00:16:35.842 fused_ordering(169) 00:16:35.842 fused_ordering(170) 00:16:35.842 fused_ordering(171) 00:16:35.842 fused_ordering(172) 00:16:35.842 fused_ordering(173) 00:16:35.842 fused_ordering(174) 00:16:35.842 fused_ordering(175) 00:16:35.842 fused_ordering(176) 00:16:35.842 fused_ordering(177) 00:16:35.842 fused_ordering(178) 00:16:35.842 fused_ordering(179) 00:16:35.842 fused_ordering(180) 00:16:35.842 fused_ordering(181) 00:16:35.842 fused_ordering(182) 00:16:35.842 fused_ordering(183) 00:16:35.842 fused_ordering(184) 00:16:35.842 fused_ordering(185) 00:16:35.842 fused_ordering(186) 00:16:35.842 fused_ordering(187) 00:16:35.842 fused_ordering(188) 00:16:35.842 fused_ordering(189) 00:16:35.842 fused_ordering(190) 00:16:35.842 fused_ordering(191) 00:16:35.842 fused_ordering(192) 00:16:35.842 fused_ordering(193) 00:16:35.842 fused_ordering(194) 00:16:35.842 fused_ordering(195) 00:16:35.842 fused_ordering(196) 00:16:35.842 fused_ordering(197) 00:16:35.842 fused_ordering(198) 00:16:35.842 fused_ordering(199) 00:16:35.842 fused_ordering(200) 00:16:35.842 fused_ordering(201) 00:16:35.842 fused_ordering(202) 00:16:35.842 fused_ordering(203) 00:16:35.842 fused_ordering(204) 
00:16:35.842 fused_ordering(205) 00:16:35.842 fused_ordering(206) 00:16:35.842 fused_ordering(207) 00:16:35.842 fused_ordering(208) 00:16:35.842 fused_ordering(209) 00:16:35.842 fused_ordering(210) 00:16:35.842 fused_ordering(211) 00:16:35.842 fused_ordering(212) 00:16:35.842 fused_ordering(213) 00:16:35.842 fused_ordering(214) 00:16:35.842 fused_ordering(215) 00:16:35.842 fused_ordering(216) 00:16:35.842 fused_ordering(217) 00:16:35.842 fused_ordering(218) 00:16:35.842 fused_ordering(219) 00:16:35.842 fused_ordering(220) 00:16:35.842 fused_ordering(221) 00:16:35.842 fused_ordering(222) 00:16:35.842 fused_ordering(223) 00:16:35.842 fused_ordering(224) 00:16:35.842 fused_ordering(225) 00:16:35.842 fused_ordering(226) 00:16:35.842 fused_ordering(227) 00:16:35.842 fused_ordering(228) 00:16:35.842 fused_ordering(229) 00:16:35.842 fused_ordering(230) 00:16:35.842 fused_ordering(231) 00:16:35.842 fused_ordering(232) 00:16:35.842 fused_ordering(233) 00:16:35.842 fused_ordering(234) 00:16:35.842 fused_ordering(235) 00:16:35.842 fused_ordering(236) 00:16:35.842 fused_ordering(237) 00:16:35.842 fused_ordering(238) 00:16:35.842 fused_ordering(239) 00:16:35.842 fused_ordering(240) 00:16:35.842 fused_ordering(241) 00:16:35.842 fused_ordering(242) 00:16:35.842 fused_ordering(243) 00:16:35.842 fused_ordering(244) 00:16:35.842 fused_ordering(245) 00:16:35.842 fused_ordering(246) 00:16:35.842 fused_ordering(247) 00:16:35.842 fused_ordering(248) 00:16:35.842 fused_ordering(249) 00:16:35.842 fused_ordering(250) 00:16:35.842 fused_ordering(251) 00:16:35.842 fused_ordering(252) 00:16:35.842 fused_ordering(253) 00:16:35.842 fused_ordering(254) 00:16:35.842 fused_ordering(255) 00:16:35.842 fused_ordering(256) 00:16:35.842 fused_ordering(257) 00:16:35.842 fused_ordering(258) 00:16:35.842 fused_ordering(259) 00:16:35.842 fused_ordering(260) 00:16:35.842 fused_ordering(261) 00:16:35.842 fused_ordering(262) 00:16:35.842 fused_ordering(263) 00:16:35.842 fused_ordering(264) 00:16:35.842 fused_ordering(265) 00:16:35.842 fused_ordering(266) 00:16:35.842 fused_ordering(267) 00:16:35.842 fused_ordering(268) 00:16:35.842 fused_ordering(269) 00:16:35.842 fused_ordering(270) 00:16:35.842 fused_ordering(271) 00:16:35.842 fused_ordering(272) 00:16:35.842 fused_ordering(273) 00:16:35.842 fused_ordering(274) 00:16:35.842 fused_ordering(275) 00:16:35.842 fused_ordering(276) 00:16:35.842 fused_ordering(277) 00:16:35.842 fused_ordering(278) 00:16:35.842 fused_ordering(279) 00:16:35.842 fused_ordering(280) 00:16:35.842 fused_ordering(281) 00:16:35.842 fused_ordering(282) 00:16:35.842 fused_ordering(283) 00:16:35.842 fused_ordering(284) 00:16:35.842 fused_ordering(285) 00:16:35.842 fused_ordering(286) 00:16:35.842 fused_ordering(287) 00:16:35.842 fused_ordering(288) 00:16:35.842 fused_ordering(289) 00:16:35.842 fused_ordering(290) 00:16:35.842 fused_ordering(291) 00:16:35.842 fused_ordering(292) 00:16:35.842 fused_ordering(293) 00:16:35.842 fused_ordering(294) 00:16:35.842 fused_ordering(295) 00:16:35.842 fused_ordering(296) 00:16:35.842 fused_ordering(297) 00:16:35.842 fused_ordering(298) 00:16:35.842 fused_ordering(299) 00:16:35.842 fused_ordering(300) 00:16:35.842 fused_ordering(301) 00:16:35.842 fused_ordering(302) 00:16:35.842 fused_ordering(303) 00:16:35.842 fused_ordering(304) 00:16:35.842 fused_ordering(305) 00:16:35.842 fused_ordering(306) 00:16:35.842 fused_ordering(307) 00:16:35.842 fused_ordering(308) 00:16:35.842 fused_ordering(309) 00:16:35.842 fused_ordering(310) 00:16:35.842 fused_ordering(311) 00:16:35.842 
fused_ordering(312) 00:16:35.842 fused_ordering(313) 00:16:35.842 fused_ordering(314) 00:16:35.842 fused_ordering(315) 00:16:35.842 fused_ordering(316) 00:16:35.842 fused_ordering(317) 00:16:35.842 fused_ordering(318) 00:16:35.842 fused_ordering(319) 00:16:35.842 fused_ordering(320) 00:16:35.842 fused_ordering(321) 00:16:35.842 fused_ordering(322) 00:16:35.842 fused_ordering(323) 00:16:35.842 fused_ordering(324) 00:16:35.842 fused_ordering(325) 00:16:35.842 fused_ordering(326) 00:16:35.842 fused_ordering(327) 00:16:35.842 fused_ordering(328) 00:16:35.842 fused_ordering(329) 00:16:35.842 fused_ordering(330) 00:16:35.842 fused_ordering(331) 00:16:35.842 fused_ordering(332) 00:16:35.842 fused_ordering(333) 00:16:35.842 fused_ordering(334) 00:16:35.842 fused_ordering(335) 00:16:35.842 fused_ordering(336) 00:16:35.842 fused_ordering(337) 00:16:35.842 fused_ordering(338) 00:16:35.842 fused_ordering(339) 00:16:35.842 fused_ordering(340) 00:16:35.842 fused_ordering(341) 00:16:35.842 fused_ordering(342) 00:16:35.842 fused_ordering(343) 00:16:35.842 fused_ordering(344) 00:16:35.842 fused_ordering(345) 00:16:35.842 fused_ordering(346) 00:16:35.842 fused_ordering(347) 00:16:35.842 fused_ordering(348) 00:16:35.842 fused_ordering(349) 00:16:35.842 fused_ordering(350) 00:16:35.842 fused_ordering(351) 00:16:35.842 fused_ordering(352) 00:16:35.842 fused_ordering(353) 00:16:35.842 fused_ordering(354) 00:16:35.842 fused_ordering(355) 00:16:35.842 fused_ordering(356) 00:16:35.842 fused_ordering(357) 00:16:35.842 fused_ordering(358) 00:16:35.842 fused_ordering(359) 00:16:35.842 fused_ordering(360) 00:16:35.842 fused_ordering(361) 00:16:35.842 fused_ordering(362) 00:16:35.842 fused_ordering(363) 00:16:35.842 fused_ordering(364) 00:16:35.842 fused_ordering(365) 00:16:35.842 fused_ordering(366) 00:16:35.842 fused_ordering(367) 00:16:35.842 fused_ordering(368) 00:16:35.842 fused_ordering(369) 00:16:35.842 fused_ordering(370) 00:16:35.842 fused_ordering(371) 00:16:35.842 fused_ordering(372) 00:16:35.842 fused_ordering(373) 00:16:35.842 fused_ordering(374) 00:16:35.842 fused_ordering(375) 00:16:35.842 fused_ordering(376) 00:16:35.842 fused_ordering(377) 00:16:35.842 fused_ordering(378) 00:16:35.842 fused_ordering(379) 00:16:35.842 fused_ordering(380) 00:16:35.842 fused_ordering(381) 00:16:35.842 fused_ordering(382) 00:16:35.842 fused_ordering(383) 00:16:35.842 fused_ordering(384) 00:16:35.842 fused_ordering(385) 00:16:35.842 fused_ordering(386) 00:16:35.842 fused_ordering(387) 00:16:35.842 fused_ordering(388) 00:16:35.842 fused_ordering(389) 00:16:35.842 fused_ordering(390) 00:16:35.842 fused_ordering(391) 00:16:35.842 fused_ordering(392) 00:16:35.842 fused_ordering(393) 00:16:35.842 fused_ordering(394) 00:16:35.842 fused_ordering(395) 00:16:35.842 fused_ordering(396) 00:16:35.842 fused_ordering(397) 00:16:35.842 fused_ordering(398) 00:16:35.842 fused_ordering(399) 00:16:35.842 fused_ordering(400) 00:16:35.842 fused_ordering(401) 00:16:35.842 fused_ordering(402) 00:16:35.842 fused_ordering(403) 00:16:35.842 fused_ordering(404) 00:16:35.842 fused_ordering(405) 00:16:35.842 fused_ordering(406) 00:16:35.842 fused_ordering(407) 00:16:35.842 fused_ordering(408) 00:16:35.842 fused_ordering(409) 00:16:35.842 fused_ordering(410) 00:16:35.842 fused_ordering(411) 00:16:35.842 fused_ordering(412) 00:16:35.842 fused_ordering(413) 00:16:35.842 fused_ordering(414) 00:16:35.842 fused_ordering(415) 00:16:35.842 fused_ordering(416) 00:16:35.842 fused_ordering(417) 00:16:35.842 fused_ordering(418) 00:16:35.842 fused_ordering(419) 
00:16:35.842 fused_ordering(420) 00:16:35.842 fused_ordering(421) 00:16:35.842 fused_ordering(422) 00:16:35.842 fused_ordering(423) 00:16:35.842 fused_ordering(424) 00:16:35.842 fused_ordering(425) 00:16:35.842 fused_ordering(426) 00:16:35.842 fused_ordering(427) 00:16:35.842 fused_ordering(428) 00:16:35.842 fused_ordering(429) 00:16:35.842 fused_ordering(430) 00:16:35.842 fused_ordering(431) 00:16:35.842 fused_ordering(432) 00:16:35.842 fused_ordering(433) 00:16:35.842 fused_ordering(434) 00:16:35.842 fused_ordering(435) 00:16:35.842 fused_ordering(436) 00:16:35.842 fused_ordering(437) 00:16:35.842 fused_ordering(438) 00:16:35.842 fused_ordering(439) 00:16:35.842 fused_ordering(440) 00:16:35.842 fused_ordering(441) 00:16:35.842 fused_ordering(442) 00:16:35.842 fused_ordering(443) 00:16:35.842 fused_ordering(444) 00:16:35.842 fused_ordering(445) 00:16:35.842 fused_ordering(446) 00:16:35.842 fused_ordering(447) 00:16:35.842 fused_ordering(448) 00:16:35.843 fused_ordering(449) 00:16:35.843 fused_ordering(450) 00:16:35.843 fused_ordering(451) 00:16:35.843 fused_ordering(452) 00:16:35.843 fused_ordering(453) 00:16:35.843 fused_ordering(454) 00:16:35.843 fused_ordering(455) 00:16:35.843 fused_ordering(456) 00:16:35.843 fused_ordering(457) 00:16:35.843 fused_ordering(458) 00:16:35.843 fused_ordering(459) 00:16:35.843 fused_ordering(460) 00:16:35.843 fused_ordering(461) 00:16:35.843 fused_ordering(462) 00:16:35.843 fused_ordering(463) 00:16:35.843 fused_ordering(464) 00:16:35.843 fused_ordering(465) 00:16:35.843 fused_ordering(466) 00:16:35.843 fused_ordering(467) 00:16:35.843 fused_ordering(468) 00:16:35.843 fused_ordering(469) 00:16:35.843 fused_ordering(470) 00:16:35.843 fused_ordering(471) 00:16:35.843 fused_ordering(472) 00:16:35.843 fused_ordering(473) 00:16:35.843 fused_ordering(474) 00:16:35.843 fused_ordering(475) 00:16:35.843 fused_ordering(476) 00:16:35.843 fused_ordering(477) 00:16:35.843 fused_ordering(478) 00:16:35.843 fused_ordering(479) 00:16:35.843 fused_ordering(480) 00:16:35.843 fused_ordering(481) 00:16:35.843 fused_ordering(482) 00:16:35.843 fused_ordering(483) 00:16:35.843 fused_ordering(484) 00:16:35.843 fused_ordering(485) 00:16:35.843 fused_ordering(486) 00:16:35.843 fused_ordering(487) 00:16:35.843 fused_ordering(488) 00:16:35.843 fused_ordering(489) 00:16:35.843 fused_ordering(490) 00:16:35.843 fused_ordering(491) 00:16:35.843 fused_ordering(492) 00:16:35.843 fused_ordering(493) 00:16:35.843 fused_ordering(494) 00:16:35.843 fused_ordering(495) 00:16:35.843 fused_ordering(496) 00:16:35.843 fused_ordering(497) 00:16:35.843 fused_ordering(498) 00:16:35.843 fused_ordering(499) 00:16:35.843 fused_ordering(500) 00:16:35.843 fused_ordering(501) 00:16:35.843 fused_ordering(502) 00:16:35.843 fused_ordering(503) 00:16:35.843 fused_ordering(504) 00:16:35.843 fused_ordering(505) 00:16:35.843 fused_ordering(506) 00:16:35.843 fused_ordering(507) 00:16:35.843 fused_ordering(508) 00:16:35.843 fused_ordering(509) 00:16:35.843 fused_ordering(510) 00:16:35.843 fused_ordering(511) 00:16:35.843 fused_ordering(512) 00:16:35.843 fused_ordering(513) 00:16:35.843 fused_ordering(514) 00:16:35.843 fused_ordering(515) 00:16:35.843 fused_ordering(516) 00:16:35.843 fused_ordering(517) 00:16:35.843 fused_ordering(518) 00:16:35.843 fused_ordering(519) 00:16:35.843 fused_ordering(520) 00:16:35.843 fused_ordering(521) 00:16:35.843 fused_ordering(522) 00:16:35.843 fused_ordering(523) 00:16:35.843 fused_ordering(524) 00:16:35.843 fused_ordering(525) 00:16:35.843 fused_ordering(526) 00:16:35.843 
fused_ordering(527) through fused_ordering(956) [repetitive per-request fused_ordering(N) entries, timestamps 00:16:35.843-00:16:36.104, condensed] 00:16:36.104
fused_ordering(957) 00:16:36.104 fused_ordering(958) 00:16:36.104 fused_ordering(959) 00:16:36.104 fused_ordering(960) 00:16:36.104 fused_ordering(961) 00:16:36.104 fused_ordering(962) 00:16:36.104 fused_ordering(963) 00:16:36.104 fused_ordering(964) 00:16:36.104 fused_ordering(965) 00:16:36.104 fused_ordering(966) 00:16:36.104 fused_ordering(967) 00:16:36.104 fused_ordering(968) 00:16:36.104 fused_ordering(969) 00:16:36.104 fused_ordering(970) 00:16:36.104 fused_ordering(971) 00:16:36.104 fused_ordering(972) 00:16:36.104 fused_ordering(973) 00:16:36.104 fused_ordering(974) 00:16:36.104 fused_ordering(975) 00:16:36.104 fused_ordering(976) 00:16:36.104 fused_ordering(977) 00:16:36.104 fused_ordering(978) 00:16:36.104 fused_ordering(979) 00:16:36.104 fused_ordering(980) 00:16:36.104 fused_ordering(981) 00:16:36.104 fused_ordering(982) 00:16:36.104 fused_ordering(983) 00:16:36.104 fused_ordering(984) 00:16:36.104 fused_ordering(985) 00:16:36.104 fused_ordering(986) 00:16:36.104 fused_ordering(987) 00:16:36.104 fused_ordering(988) 00:16:36.104 fused_ordering(989) 00:16:36.104 fused_ordering(990) 00:16:36.104 fused_ordering(991) 00:16:36.104 fused_ordering(992) 00:16:36.104 fused_ordering(993) 00:16:36.104 fused_ordering(994) 00:16:36.104 fused_ordering(995) 00:16:36.104 fused_ordering(996) 00:16:36.104 fused_ordering(997) 00:16:36.104 fused_ordering(998) 00:16:36.104 fused_ordering(999) 00:16:36.104 fused_ordering(1000) 00:16:36.104 fused_ordering(1001) 00:16:36.104 fused_ordering(1002) 00:16:36.104 fused_ordering(1003) 00:16:36.105 fused_ordering(1004) 00:16:36.105 fused_ordering(1005) 00:16:36.105 fused_ordering(1006) 00:16:36.105 fused_ordering(1007) 00:16:36.105 fused_ordering(1008) 00:16:36.105 fused_ordering(1009) 00:16:36.105 fused_ordering(1010) 00:16:36.105 fused_ordering(1011) 00:16:36.105 fused_ordering(1012) 00:16:36.105 fused_ordering(1013) 00:16:36.105 fused_ordering(1014) 00:16:36.105 fused_ordering(1015) 00:16:36.105 fused_ordering(1016) 00:16:36.105 fused_ordering(1017) 00:16:36.105 fused_ordering(1018) 00:16:36.105 fused_ordering(1019) 00:16:36.105 fused_ordering(1020) 00:16:36.105 fused_ordering(1021) 00:16:36.105 fused_ordering(1022) 00:16:36.105 fused_ordering(1023) 00:16:36.105 23:15:41 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:36.105 23:15:41 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:36.105 23:15:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:36.105 23:15:41 -- nvmf/common.sh@116 -- # sync 00:16:36.105 23:15:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:36.105 23:15:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:36.105 23:15:41 -- nvmf/common.sh@119 -- # set +e 00:16:36.105 23:15:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:36.105 23:15:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:36.364 rmmod nvme_rdma 00:16:36.364 rmmod nvme_fabrics 00:16:36.364 23:15:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:36.364 23:15:41 -- nvmf/common.sh@123 -- # set -e 00:16:36.364 23:15:41 -- nvmf/common.sh@124 -- # return 0 00:16:36.364 23:15:41 -- nvmf/common.sh@477 -- # '[' -n 591149 ']' 00:16:36.364 23:15:41 -- nvmf/common.sh@478 -- # killprocess 591149 00:16:36.364 23:15:41 -- common/autotest_common.sh@926 -- # '[' -z 591149 ']' 00:16:36.364 23:15:41 -- common/autotest_common.sh@930 -- # kill -0 591149 00:16:36.364 23:15:41 -- common/autotest_common.sh@931 -- # uname 00:16:36.364 23:15:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:36.364 23:15:41 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 591149 00:16:36.364 23:15:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:36.364 23:15:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:36.364 23:15:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 591149' 00:16:36.364 killing process with pid 591149 00:16:36.364 23:15:41 -- common/autotest_common.sh@945 -- # kill 591149 00:16:36.364 23:15:41 -- common/autotest_common.sh@950 -- # wait 591149 00:16:36.624 23:15:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:36.624 23:15:42 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:36.624 00:16:36.624 real 0m8.527s 00:16:36.624 user 0m4.618s 00:16:36.624 sys 0m5.168s 00:16:36.624 23:15:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.624 23:15:42 -- common/autotest_common.sh@10 -- # set +x 00:16:36.624 ************************************ 00:16:36.624 END TEST nvmf_fused_ordering 00:16:36.624 ************************************ 00:16:36.624 23:15:42 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:36.624 23:15:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:36.624 23:15:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:36.624 23:15:42 -- common/autotest_common.sh@10 -- # set +x 00:16:36.624 ************************************ 00:16:36.624 START TEST nvmf_delete_subsystem 00:16:36.624 ************************************ 00:16:36.624 23:15:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:36.624 * Looking for test storage... 
00:16:36.624 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:36.624 23:15:42 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.624 23:15:42 -- nvmf/common.sh@7 -- # uname -s 00:16:36.624 23:15:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.624 23:15:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.624 23:15:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.624 23:15:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.624 23:15:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.624 23:15:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.624 23:15:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.624 23:15:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.624 23:15:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.624 23:15:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.624 23:15:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:36.624 23:15:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:36.624 23:15:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.624 23:15:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.624 23:15:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.624 23:15:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:36.624 23:15:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.624 23:15:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.624 23:15:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.624 23:15:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.624 23:15:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.624 23:15:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.624 23:15:42 -- paths/export.sh@5 -- # export PATH 00:16:36.624 23:15:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.624 23:15:42 -- nvmf/common.sh@46 -- # : 0 00:16:36.624 23:15:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:36.624 23:15:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:36.624 23:15:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:36.624 23:15:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.624 23:15:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.624 23:15:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:36.624 23:15:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:36.624 23:15:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:36.624 23:15:42 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:36.624 23:15:42 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:36.624 23:15:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.624 23:15:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:36.624 23:15:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:36.624 23:15:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:36.624 23:15:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.624 23:15:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.624 23:15:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.883 23:15:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:36.883 23:15:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:36.883 23:15:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:36.883 23:15:42 -- common/autotest_common.sh@10 -- # set +x 00:16:43.529 23:15:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:43.529 23:15:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:43.529 23:15:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:43.529 23:15:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:43.529 23:15:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:43.529 23:15:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:43.529 23:15:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:43.529 23:15:48 -- nvmf/common.sh@294 -- # net_devs=() 00:16:43.529 23:15:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:43.529 23:15:48 -- nvmf/common.sh@295 -- # e810=() 00:16:43.529 23:15:48 -- nvmf/common.sh@295 -- # local -ga e810 00:16:43.529 23:15:48 -- nvmf/common.sh@296 -- # 
x722=() 00:16:43.529 23:15:48 -- nvmf/common.sh@296 -- # local -ga x722 00:16:43.529 23:15:48 -- nvmf/common.sh@297 -- # mlx=() 00:16:43.529 23:15:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:43.529 23:15:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.529 23:15:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:43.529 23:15:48 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:43.529 23:15:48 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:43.529 23:15:48 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:43.529 23:15:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:43.529 23:15:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:43.529 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:43.529 23:15:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:43.529 23:15:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:43.529 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:43.529 23:15:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:43.529 23:15:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:43.529 23:15:48 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.529 23:15:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:43.529 23:15:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.529 23:15:48 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:43.529 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:43.529 23:15:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.529 23:15:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.529 23:15:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:43.529 23:15:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.529 23:15:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:43.529 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:43.529 23:15:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.529 23:15:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:43.529 23:15:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:43.529 23:15:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:43.529 23:15:48 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:43.529 23:15:48 -- nvmf/common.sh@57 -- # uname 00:16:43.529 23:15:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:43.529 23:15:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:43.529 23:15:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:43.529 23:15:48 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:43.529 23:15:48 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:43.529 23:15:48 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:43.529 23:15:48 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:43.529 23:15:48 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:43.529 23:15:48 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:43.529 23:15:48 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:43.529 23:15:48 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:43.529 23:15:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:43.529 23:15:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:43.529 23:15:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:43.529 23:15:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:43.529 23:15:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:43.529 23:15:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:43.529 23:15:48 -- nvmf/common.sh@104 -- # continue 2 00:16:43.529 23:15:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.529 23:15:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:43.529 23:15:48 -- nvmf/common.sh@104 -- # continue 2 00:16:43.529 23:15:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:43.529 23:15:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:43.529 23:15:48 -- 
nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:43.529 23:15:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:43.529 23:15:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:43.529 23:15:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:43.529 23:15:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:43.529 23:15:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:43.529 23:15:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:43.529 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:43.529 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:43.529 altname enp217s0f0np0 00:16:43.529 altname ens818f0np0 00:16:43.529 inet 192.168.100.8/24 scope global mlx_0_0 00:16:43.529 valid_lft forever preferred_lft forever 00:16:43.529 23:15:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:43.529 23:15:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:43.529 23:15:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:43.529 23:15:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:43.529 23:15:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:43.529 23:15:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:43.529 23:15:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:43.529 23:15:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:43.529 23:15:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:43.529 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:43.529 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:43.529 altname enp217s0f1np1 00:16:43.529 altname ens818f1np1 00:16:43.529 inet 192.168.100.9/24 scope global mlx_0_1 00:16:43.529 valid_lft forever preferred_lft forever 00:16:43.529 23:15:49 -- nvmf/common.sh@410 -- # return 0 00:16:43.529 23:15:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:43.529 23:15:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:43.529 23:15:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:43.529 23:15:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:43.529 23:15:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:43.529 23:15:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:43.529 23:15:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:43.529 23:15:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:43.529 23:15:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:43.529 23:15:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:43.529 23:15:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:43.529 23:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.529 23:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:43.529 23:15:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:43.530 23:15:49 -- nvmf/common.sh@104 -- # continue 2 00:16:43.530 23:15:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:43.530 23:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.530 23:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:43.530 23:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.530 23:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:43.530 23:15:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:43.530 23:15:49 -- nvmf/common.sh@104 -- # continue 2 00:16:43.530 23:15:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:43.530 
23:15:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:43.530 23:15:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:43.530 23:15:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:43.530 23:15:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:43.530 23:15:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:43.530 23:15:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:43.530 23:15:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:43.530 23:15:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:43.530 23:15:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:43.530 23:15:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:43.530 23:15:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:43.530 23:15:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:43.530 192.168.100.9' 00:16:43.530 23:15:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:43.530 192.168.100.9' 00:16:43.530 23:15:49 -- nvmf/common.sh@445 -- # head -n 1 00:16:43.530 23:15:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:43.530 23:15:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:43.530 192.168.100.9' 00:16:43.530 23:15:49 -- nvmf/common.sh@446 -- # tail -n +2 00:16:43.530 23:15:49 -- nvmf/common.sh@446 -- # head -n 1 00:16:43.530 23:15:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:43.530 23:15:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:43.530 23:15:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:43.530 23:15:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:43.530 23:15:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:43.530 23:15:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:43.530 23:15:49 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:43.530 23:15:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.530 23:15:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:43.530 23:15:49 -- common/autotest_common.sh@10 -- # set +x 00:16:43.530 23:15:49 -- nvmf/common.sh@469 -- # nvmfpid=594880 00:16:43.530 23:15:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:43.530 23:15:49 -- nvmf/common.sh@470 -- # waitforlisten 594880 00:16:43.530 23:15:49 -- common/autotest_common.sh@819 -- # '[' -z 594880 ']' 00:16:43.530 23:15:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.530 23:15:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:43.530 23:15:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.530 23:15:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:43.530 23:15:49 -- common/autotest_common.sh@10 -- # set +x 00:16:43.530 [2024-11-02 23:15:49.166681] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:43.530 [2024-11-02 23:15:49.166728] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.530 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.530 [2024-11-02 23:15:49.236271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.789 [2024-11-02 23:15:49.309434] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:43.789 [2024-11-02 23:15:49.309540] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.789 [2024-11-02 23:15:49.309550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.789 [2024-11-02 23:15:49.309559] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.789 [2024-11-02 23:15:49.309604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.789 [2024-11-02 23:15:49.309607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.356 23:15:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:44.356 23:15:49 -- common/autotest_common.sh@852 -- # return 0 00:16:44.356 23:15:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:44.356 23:15:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:44.356 23:15:49 -- common/autotest_common.sh@10 -- # set +x 00:16:44.356 23:15:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.356 23:15:50 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:44.356 23:15:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.356 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.356 [2024-11-02 23:15:50.054412] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1026a60/0x102af50) succeed. 00:16:44.356 [2024-11-02 23:15:50.063506] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1027f60/0x106c5f0) succeed. 
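For orientation, the transport bring-up captured above is ordinary SPDK JSON-RPC driven through the test helpers; a minimal sketch of the same step done by hand with scripts/rpc.py (paths relative to an SPDK checkout, core mask and buffer values copied from the nvmf_tgt and rpc_cmd lines above) would be:
./build/bin/nvmf_tgt -m 0x3 &        # target app pinned to cores 0-1, matching the -m 0x3 used by this job
sleep 2                              # crude stand-in for the waitforlisten helper the harness uses
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192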
00:16:44.614 23:15:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.615 23:15:50 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:44.615 23:15:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.615 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 23:15:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.615 23:15:50 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:44.615 23:15:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.615 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 [2024-11-02 23:15:50.152284] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:44.615 23:15:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.615 23:15:50 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:44.615 23:15:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.615 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 NULL1 00:16:44.615 23:15:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.615 23:15:50 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:44.615 23:15:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.615 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 Delay0 00:16:44.615 23:15:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.615 23:15:50 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:44.615 23:15:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.615 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 23:15:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.615 23:15:50 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:44.615 23:15:50 -- target/delete_subsystem.sh@28 -- # perf_pid=594963 00:16:44.615 23:15:50 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:44.615 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.615 [2024-11-02 23:15:50.245045] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
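The subsystem and backing device created in the run above can be reproduced outside the harness with the equivalent rpc.py calls (arguments copied from the rpc_cmd lines; the Delay0 bdev is what keeps I/O in flight long enough for the delete-under-load check that follows):
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, 512-byte blocks
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # added latency, values in microseconds (~1 s)
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0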
00:16:46.518 23:15:52 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:46.518 23:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:46.518 23:15:52 -- common/autotest_common.sh@10 -- # set +x
00:16:47.897 NVMe io qpair process completion error
00:16:47.897 NVMe io qpair process completion error
00:16:47.897 NVMe io qpair process completion error
00:16:47.897 NVMe io qpair process completion error
00:16:47.897 NVMe io qpair process completion error
00:16:47.897 NVMe io qpair process completion error
00:16:47.897 23:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:47.897 23:15:53 -- target/delete_subsystem.sh@34 -- # delay=0
00:16:47.897 23:15:53 -- target/delete_subsystem.sh@35 -- # kill -0 594963
00:16:47.897 23:15:53 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:16:48.156 23:15:53 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:16:48.156 23:15:53 -- target/delete_subsystem.sh@35 -- # kill -0 594963
00:16:48.156 23:15:53 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:16:48.725 [repeated 'Write/Read completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completions from the in-flight perf I/O, condensed]
00:16:48.727 23:15:54 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:16:48.727 23:15:54 -- target/delete_subsystem.sh@35 -- # kill -0 594963
00:16:48.727 23:15:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:16:48.727 [2024-11-02 23:15:54.343019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:48.727 [2024-11-02 23:15:54.343058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:48.727 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:16:48.727 Initializing NVMe Controllers
00:16:48.727 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:48.727 Controller IO queue size 128, less than required.
00:16:48.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:48.727 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:16:48.727 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:16:48.727 Initialization complete. Launching workers.
00:16:48.727 ========================================================
00:16:48.727                                                                                   Latency(us)
00:16:48.727 Device Information                                                             :   IOPS   MiB/s    Average        min        max
00:16:48.727 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  80.47    0.04 1594107.00 1000121.07 2976913.71
00:16:48.727 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  80.47    0.04 1595647.04 1001047.57 2978452.14
00:16:48.727 ========================================================
00:16:48.727 Total                                                                          : 160.94    0.08 1594877.02 1000121.07 2978452.14
00:16:48.727
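Reading the xtrace line numbers above back into shell, the wait logic in delete_subsystem.sh (lines 34-38) is essentially the sketch below, not the verbatim script: the subsystem was deleted while perf still had requests queued against Delay0, so those requests complete with errors, perf (pid 594963 in this run) exits, and the loop merely polls for that exit.
delay=0
while kill -0 $perf_pid; do          # still running?
    sleep 0.5
    if (( delay++ > 30 )); then      # give up after ~15 s instead of hanging the job
        echo "perf did not exit after subsystem delete"
        exit 1
    fi
done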
00:16:49.295 23:15:54 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:49.295 23:15:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.295 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.295 23:15:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.295 23:15:54 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:49.295 23:15:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.295 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.295 [2024-11-02 23:15:54.862799] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:49.296 23:15:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.296 23:15:54 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:49.296 23:15:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.296 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.296 23:15:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.296 23:15:54 -- target/delete_subsystem.sh@54 -- # perf_pid=595756 00:16:49.296 23:15:54 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:49.296 23:15:54 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:49.296 23:15:54 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:49.296 23:15:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.296 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.296 [2024-11-02 23:15:54.949208] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
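Condensed, the sequence just replayed is: re-create the subsystem over the RPC socket, re-add its RDMA listener and namespace, then relaunch a short perf workload and remember its pid so the delete-and-poll cycle can run again. A sketch assuming rpc.py and the binaries are invoked from the top of the spdk tree (the RPC names and perf flags are the ones visible in the trace):

# Recreate the subsystem that the previous pass deleted.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Launch a 3-second 70/30 randrw workload (512-byte IOs, queue depth 128) and keep its pid.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

With -o 512, the MiB/s column in the Latency(us) summaries is just IOPS times 512 bytes; 128 IOPS, for example, works out to about 0.06 MiB/s, matching the table printed further down.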
00:16:49.864 23:15:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:49.864 23:15:55 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:49.864 23:15:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:50.432 23:15:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:50.432 23:15:55 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:50.432 23:15:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:50.692 23:15:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:50.692 23:15:56 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:50.692 23:15:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:51.261 23:15:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:51.261 23:15:56 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:51.261 23:15:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:51.829 23:15:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:51.829 23:15:57 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:51.829 23:15:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:52.397 23:15:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:52.397 23:15:57 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:52.397 23:15:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:52.966 23:15:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:52.966 23:15:58 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:52.966 23:15:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:53.225 23:15:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:53.225 23:15:58 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:53.225 23:15:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:53.793 23:15:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:53.793 23:15:59 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:53.793 23:15:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.370 23:15:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.370 23:15:59 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:54.370 23:15:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.939 23:16:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.939 23:16:00 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:54.939 23:16:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:55.198 23:16:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:55.198 23:16:00 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:55.198 23:16:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:55.765 23:16:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:55.765 23:16:01 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:55.765 23:16:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:56.334 23:16:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:56.334 23:16:01 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:56.334 23:16:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:56.594 Initializing NVMe Controllers 00:16:56.594 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:56.594 Controller IO queue size 128, less than required. 00:16:56.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:16:56.594 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:56.594 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:56.594 Initialization complete. Launching workers. 00:16:56.594 ======================================================== 00:16:56.594 Latency(us) 00:16:56.594 Device Information : IOPS MiB/s Average min max 00:16:56.594 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001283.64 1000061.98 1004014.73 00:16:56.594 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002674.74 1000446.09 1005952.07 00:16:56.594 ======================================================== 00:16:56.594 Total : 256.00 0.12 1001979.19 1000061.98 1005952.07 00:16:56.594 00:16:56.853 23:16:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:56.853 23:16:02 -- target/delete_subsystem.sh@57 -- # kill -0 595756 00:16:56.853 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (595756) - No such process 00:16:56.853 23:16:02 -- target/delete_subsystem.sh@67 -- # wait 595756 00:16:56.853 23:16:02 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:56.853 23:16:02 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:56.853 23:16:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:56.853 23:16:02 -- nvmf/common.sh@116 -- # sync 00:16:56.853 23:16:02 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:56.853 23:16:02 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:56.853 23:16:02 -- nvmf/common.sh@119 -- # set +e 00:16:56.853 23:16:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:56.853 23:16:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:56.853 rmmod nvme_rdma 00:16:56.853 rmmod nvme_fabrics 00:16:56.853 23:16:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:56.853 23:16:02 -- nvmf/common.sh@123 -- # set -e 00:16:56.853 23:16:02 -- nvmf/common.sh@124 -- # return 0 00:16:56.853 23:16:02 -- nvmf/common.sh@477 -- # '[' -n 594880 ']' 00:16:56.853 23:16:02 -- nvmf/common.sh@478 -- # killprocess 594880 00:16:56.853 23:16:02 -- common/autotest_common.sh@926 -- # '[' -z 594880 ']' 00:16:56.853 23:16:02 -- common/autotest_common.sh@930 -- # kill -0 594880 00:16:56.853 23:16:02 -- common/autotest_common.sh@931 -- # uname 00:16:56.853 23:16:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.853 23:16:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 594880 00:16:56.853 23:16:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:56.853 23:16:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:56.853 23:16:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 594880' 00:16:56.853 killing process with pid 594880 00:16:56.853 23:16:02 -- common/autotest_common.sh@945 -- # kill 594880 00:16:56.853 23:16:02 -- common/autotest_common.sh@950 -- # wait 594880 00:16:57.113 23:16:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:57.113 23:16:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:57.113 00:16:57.113 real 0m20.577s 00:16:57.113 user 0m50.169s 00:16:57.113 sys 0m6.341s 00:16:57.113 23:16:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.113 23:16:02 -- common/autotest_common.sh@10 -- # set +x 00:16:57.113 ************************************ 00:16:57.113 END TEST nvmf_delete_subsystem 00:16:57.113 
************************************ 00:16:57.113 23:16:02 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:16:57.113 23:16:02 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:57.113 23:16:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:57.113 23:16:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:57.113 23:16:02 -- common/autotest_common.sh@10 -- # set +x 00:16:57.372 ************************************ 00:16:57.372 START TEST nvmf_nvme_cli 00:16:57.372 ************************************ 00:16:57.372 23:16:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:57.372 * Looking for test storage... 00:16:57.372 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:57.372 23:16:02 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.372 23:16:02 -- nvmf/common.sh@7 -- # uname -s 00:16:57.372 23:16:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.372 23:16:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.372 23:16:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.372 23:16:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.372 23:16:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.372 23:16:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.372 23:16:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.372 23:16:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.372 23:16:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.372 23:16:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.372 23:16:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:57.372 23:16:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:57.372 23:16:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.372 23:16:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.372 23:16:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.372 23:16:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:57.372 23:16:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.372 23:16:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.372 23:16:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.372 23:16:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.372 23:16:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.372 23:16:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.372 23:16:02 -- paths/export.sh@5 -- # export PATH 00:16:57.372 23:16:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.372 23:16:02 -- nvmf/common.sh@46 -- # : 0 00:16:57.372 23:16:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:57.372 23:16:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:57.372 23:16:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:57.372 23:16:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.372 23:16:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.372 23:16:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:57.372 23:16:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:57.372 23:16:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:57.372 23:16:03 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.372 23:16:03 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.372 23:16:03 -- target/nvme_cli.sh@14 -- # devs=() 00:16:57.372 23:16:03 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:57.372 23:16:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:57.372 23:16:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.372 23:16:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:57.372 23:16:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:57.372 23:16:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:57.372 23:16:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.372 23:16:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.372 23:16:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.372 23:16:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:57.372 23:16:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:57.372 23:16:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:57.372 23:16:03 -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 23:16:09 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:03.944 23:16:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:03.944 23:16:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:03.944 23:16:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:03.944 23:16:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:03.944 23:16:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:03.944 23:16:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:03.944 23:16:09 -- nvmf/common.sh@294 -- # net_devs=() 00:17:03.944 23:16:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:03.944 23:16:09 -- nvmf/common.sh@295 -- # e810=() 00:17:03.944 23:16:09 -- nvmf/common.sh@295 -- # local -ga e810 00:17:03.944 23:16:09 -- nvmf/common.sh@296 -- # x722=() 00:17:03.944 23:16:09 -- nvmf/common.sh@296 -- # local -ga x722 00:17:03.944 23:16:09 -- nvmf/common.sh@297 -- # mlx=() 00:17:03.944 23:16:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:03.944 23:16:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.944 23:16:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:03.944 23:16:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:03.944 23:16:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:03.944 23:16:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:03.944 23:16:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:03.944 23:16:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:03.944 23:16:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:03.944 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:03.944 23:16:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:03.944 23:16:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:03.944 23:16:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:03.944 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:03.944 23:16:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:03.944 23:16:09 -- nvmf/common.sh@349 
-- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:03.945 23:16:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:03.945 23:16:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.945 23:16:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:03.945 23:16:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.945 23:16:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:03.945 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.945 23:16:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.945 23:16:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:03.945 23:16:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.945 23:16:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:03.945 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.945 23:16:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:03.945 23:16:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:03.945 23:16:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:03.945 23:16:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:03.945 23:16:09 -- nvmf/common.sh@57 -- # uname 00:17:03.945 23:16:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:03.945 23:16:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:03.945 23:16:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:03.945 23:16:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:03.945 23:16:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:03.945 23:16:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:03.945 23:16:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:03.945 23:16:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:03.945 23:16:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:03.945 23:16:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:03.945 23:16:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:03.945 23:16:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:03.945 23:16:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:03.945 23:16:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:03.945 23:16:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:03.945 23:16:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:03.945 23:16:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@104 -- # 
continue 2 00:17:03.945 23:16:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@104 -- # continue 2 00:17:03.945 23:16:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:03.945 23:16:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:03.945 23:16:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:03.945 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:03.945 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:03.945 altname enp217s0f0np0 00:17:03.945 altname ens818f0np0 00:17:03.945 inet 192.168.100.8/24 scope global mlx_0_0 00:17:03.945 valid_lft forever preferred_lft forever 00:17:03.945 23:16:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:03.945 23:16:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:03.945 23:16:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:03.945 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:03.945 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:03.945 altname enp217s0f1np1 00:17:03.945 altname ens818f1np1 00:17:03.945 inet 192.168.100.9/24 scope global mlx_0_1 00:17:03.945 valid_lft forever preferred_lft forever 00:17:03.945 23:16:09 -- nvmf/common.sh@410 -- # return 0 00:17:03.945 23:16:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:03.945 23:16:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:03.945 23:16:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:03.945 23:16:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:03.945 23:16:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:03.945 23:16:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:03.945 23:16:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:03.945 23:16:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:03.945 23:16:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:03.945 23:16:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:03.945 23:16:09 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@104 -- # continue 2 00:17:03.945 23:16:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.945 23:16:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:03.945 23:16:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@104 -- # continue 2 00:17:03.945 23:16:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:03.945 23:16:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:03.945 23:16:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:03.945 23:16:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:03.945 23:16:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:03.945 23:16:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:03.945 192.168.100.9' 00:17:03.945 23:16:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:03.945 192.168.100.9' 00:17:03.945 23:16:09 -- nvmf/common.sh@445 -- # head -n 1 00:17:03.945 23:16:09 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:03.945 23:16:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:03.945 192.168.100.9' 00:17:03.945 23:16:09 -- nvmf/common.sh@446 -- # tail -n +2 00:17:03.945 23:16:09 -- nvmf/common.sh@446 -- # head -n 1 00:17:03.945 23:16:09 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:03.945 23:16:09 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:03.945 23:16:09 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:03.945 23:16:09 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:03.945 23:16:09 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:03.945 23:16:09 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:03.945 23:16:09 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:03.945 23:16:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:03.945 23:16:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:03.945 23:16:09 -- common/autotest_common.sh@10 -- # set +x 00:17:03.945 23:16:09 -- nvmf/common.sh@469 -- # nvmfpid=600494 00:17:03.945 23:16:09 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.945 23:16:09 -- nvmf/common.sh@470 -- # waitforlisten 600494 00:17:03.945 23:16:09 -- common/autotest_common.sh@819 -- # '[' -z 600494 ']' 00:17:03.945 23:16:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.945 23:16:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:03.945 23:16:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:03.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.945 23:16:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:03.945 23:16:09 -- common/autotest_common.sh@10 -- # set +x 00:17:03.945 [2024-11-02 23:16:09.626714] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:03.945 [2024-11-02 23:16:09.626761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.945 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.945 [2024-11-02 23:16:09.695301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.205 [2024-11-02 23:16:09.769935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:04.205 [2024-11-02 23:16:09.770047] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.205 [2024-11-02 23:16:09.770058] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.205 [2024-11-02 23:16:09.770066] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.205 [2024-11-02 23:16:09.770112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.205 [2024-11-02 23:16:09.770219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.205 [2024-11-02 23:16:09.770303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.205 [2024-11-02 23:16:09.770305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.772 23:16:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.772 23:16:10 -- common/autotest_common.sh@852 -- # return 0 00:17:04.772 23:16:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:04.772 23:16:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:04.772 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:04.772 23:16:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.772 23:16:10 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:04.772 23:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.772 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:04.772 [2024-11-02 23:16:10.523338] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x89d090/0x8a1580) succeed. 00:17:05.031 [2024-11-02 23:16:10.532580] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x89e680/0x8e2c20) succeed. 
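The bring-up order here matters: nvmf_tgt has to be listening on its RPC socket before any of the nvmf_* RPCs are issued, which is what nvmfappstart and waitforlisten take care of. A simplified stand-in for that startup step, assuming the default /var/tmp/spdk.sock socket (the real helpers also handle trace flags, retries, and shared-memory ids):

# Start the SPDK NVMe-oF target and wait for its RPC socket before configuring it.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified waitforlisten: poll for the RPC unix socket while the target is still alive.
while [ ! -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.2
done

# Only now can the RDMA transport be created (same options as in the trace).
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192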
00:17:05.031 23:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.032 23:16:10 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.032 23:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.032 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.032 Malloc0 00:17:05.032 23:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.032 23:16:10 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:05.032 23:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.032 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.032 Malloc1 00:17:05.032 23:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.032 23:16:10 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:05.032 23:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.032 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.032 23:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.032 23:16:10 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.032 23:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.032 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.032 23:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.032 23:16:10 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:05.032 23:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.032 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.032 23:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.032 23:16:10 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:05.032 23:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.032 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.032 [2024-11-02 23:16:10.728757] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:05.032 23:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.032 23:16:10 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:05.032 23:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.032 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.032 23:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.032 23:16:10 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:05.291 00:17:05.291 Discovery Log Number of Records 2, Generation counter 2 00:17:05.291 =====Discovery Log Entry 0====== 00:17:05.291 trtype: rdma 00:17:05.291 adrfam: ipv4 00:17:05.291 subtype: current discovery subsystem 00:17:05.291 treq: not required 00:17:05.291 portid: 0 00:17:05.291 trsvcid: 4420 00:17:05.291 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:05.291 traddr: 192.168.100.8 00:17:05.291 eflags: explicit discovery connections, duplicate discovery information 00:17:05.291 rdma_prtype: not specified 00:17:05.291 rdma_qptype: connected 00:17:05.291 rdma_cms: rdma-cm 00:17:05.291 rdma_pkey: 0x0000 00:17:05.291 =====Discovery Log Entry 1====== 00:17:05.291 trtype: rdma 
00:17:05.291 adrfam: ipv4 00:17:05.291 subtype: nvme subsystem 00:17:05.291 treq: not required 00:17:05.291 portid: 0 00:17:05.291 trsvcid: 4420 00:17:05.291 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:05.291 traddr: 192.168.100.8 00:17:05.291 eflags: none 00:17:05.291 rdma_prtype: not specified 00:17:05.291 rdma_qptype: connected 00:17:05.291 rdma_cms: rdma-cm 00:17:05.291 rdma_pkey: 0x0000 00:17:05.291 23:16:10 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:05.291 23:16:10 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:05.291 23:16:10 -- nvmf/common.sh@510 -- # local dev _ 00:17:05.291 23:16:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:05.291 23:16:10 -- nvmf/common.sh@509 -- # nvme list 00:17:05.291 23:16:10 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:05.291 23:16:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:05.291 23:16:10 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:05.291 23:16:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:05.291 23:16:10 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:05.291 23:16:10 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:06.229 23:16:11 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:06.229 23:16:11 -- common/autotest_common.sh@1177 -- # local i=0 00:17:06.229 23:16:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.229 23:16:11 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:17:06.229 23:16:11 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:17:06.229 23:16:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:08.135 23:16:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:08.135 23:16:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:08.135 23:16:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.135 23:16:13 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:17:08.135 23:16:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.135 23:16:13 -- common/autotest_common.sh@1187 -- # return 0 00:17:08.135 23:16:13 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:08.135 23:16:13 -- nvmf/common.sh@510 -- # local dev _ 00:17:08.135 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.135 23:16:13 -- nvmf/common.sh@509 -- # nvme list 00:17:08.135 23:16:13 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:08.135 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.135 23:16:13 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:08.135 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.135 23:16:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:08.135 23:16:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:08.135 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.135 23:16:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:08.135 23:16:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:08.135 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.135 23:16:13 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:08.135 /dev/nvme0n2 ]] 00:17:08.135 23:16:13 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:08.135 23:16:13 -- target/nvme_cli.sh@59 -- # get_nvme_devs 
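waitforserial above is the usual poll-lsblk pattern: after nvme connect, keep listing block devices until the expected number carrying the subsystem's serial shows up. A stripped-down version using the serial and device count from this run (the retry bound and sleep mirror the loop in the trace):

# Wait until both namespaces of the connected subsystem show up as block devices.
serial=SPDKISFASTANDAWESOME
want=2
i=0
while (( i++ <= 15 )); do
    have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( have == want )) && exit 0      # both /dev/nvme0n1 and /dev/nvme0n2 present
    sleep 2
done
echo "expected $want devices with serial $serial, found ${have:-0}" >&2
exit 1

The teardown further down mirrors this: after nvme disconnect, waitforserial_disconnect greps the same lsblk output until the serial is gone.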
00:17:08.135 23:16:13 -- nvmf/common.sh@510 -- # local dev _ 00:17:08.394 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.394 23:16:13 -- nvmf/common.sh@509 -- # nvme list 00:17:08.394 23:16:13 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:08.394 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.394 23:16:13 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:08.394 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.394 23:16:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:08.394 23:16:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:08.394 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.394 23:16:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:08.394 23:16:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:08.394 23:16:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:08.394 23:16:13 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:08.394 23:16:13 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.332 23:16:14 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.332 23:16:14 -- common/autotest_common.sh@1198 -- # local i=0 00:17:09.332 23:16:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:17:09.332 23:16:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.332 23:16:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:09.332 23:16:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.332 23:16:14 -- common/autotest_common.sh@1210 -- # return 0 00:17:09.332 23:16:14 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:09.332 23:16:14 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.332 23:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.332 23:16:14 -- common/autotest_common.sh@10 -- # set +x 00:17:09.332 23:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.332 23:16:14 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:09.332 23:16:14 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:09.332 23:16:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:09.332 23:16:14 -- nvmf/common.sh@116 -- # sync 00:17:09.332 23:16:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:09.332 23:16:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:09.332 23:16:14 -- nvmf/common.sh@119 -- # set +e 00:17:09.332 23:16:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:09.332 23:16:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:09.332 rmmod nvme_rdma 00:17:09.332 rmmod nvme_fabrics 00:17:09.332 23:16:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:09.332 23:16:14 -- nvmf/common.sh@123 -- # set -e 00:17:09.332 23:16:14 -- nvmf/common.sh@124 -- # return 0 00:17:09.332 23:16:14 -- nvmf/common.sh@477 -- # '[' -n 600494 ']' 00:17:09.332 23:16:14 -- nvmf/common.sh@478 -- # killprocess 600494 00:17:09.332 23:16:14 -- common/autotest_common.sh@926 -- # '[' -z 600494 ']' 00:17:09.332 23:16:14 -- common/autotest_common.sh@930 -- # kill -0 600494 00:17:09.332 23:16:14 -- common/autotest_common.sh@931 -- # uname 00:17:09.332 23:16:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:09.332 23:16:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 600494 00:17:09.332 23:16:15 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:09.332 23:16:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:09.332 23:16:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 600494' 00:17:09.332 killing process with pid 600494 00:17:09.332 23:16:15 -- common/autotest_common.sh@945 -- # kill 600494 00:17:09.332 23:16:15 -- common/autotest_common.sh@950 -- # wait 600494 00:17:09.902 23:16:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:09.902 23:16:15 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:09.902 00:17:09.902 real 0m12.495s 00:17:09.902 user 0m24.030s 00:17:09.902 sys 0m5.645s 00:17:09.902 23:16:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.902 23:16:15 -- common/autotest_common.sh@10 -- # set +x 00:17:09.902 ************************************ 00:17:09.902 END TEST nvmf_nvme_cli 00:17:09.902 ************************************ 00:17:09.902 23:16:15 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:09.902 23:16:15 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:09.902 23:16:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:09.902 23:16:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:09.902 23:16:15 -- common/autotest_common.sh@10 -- # set +x 00:17:09.902 ************************************ 00:17:09.902 START TEST nvmf_host_management 00:17:09.902 ************************************ 00:17:09.902 23:16:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:09.902 * Looking for test storage... 00:17:09.902 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:09.902 23:16:15 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.902 23:16:15 -- nvmf/common.sh@7 -- # uname -s 00:17:09.902 23:16:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.902 23:16:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.902 23:16:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.902 23:16:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.902 23:16:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.902 23:16:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.902 23:16:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.902 23:16:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.902 23:16:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.902 23:16:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.902 23:16:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:09.902 23:16:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:09.902 23:16:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.902 23:16:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.902 23:16:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.902 23:16:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:09.902 23:16:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.902 23:16:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.902 23:16:15 -- 
scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.902 23:16:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.902 23:16:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.902 23:16:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.902 23:16:15 -- paths/export.sh@5 -- # export PATH 00:17:09.902 23:16:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.902 23:16:15 -- nvmf/common.sh@46 -- # : 0 00:17:09.902 23:16:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:09.902 23:16:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:09.902 23:16:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:09.902 23:16:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.902 23:16:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.902 23:16:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:09.902 23:16:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:09.902 23:16:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:09.902 23:16:15 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.902 23:16:15 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.902 23:16:15 -- target/host_management.sh@104 -- # nvmftestinit 00:17:09.902 23:16:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:09.902 23:16:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.902 23:16:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:09.902 23:16:15 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:17:09.902 23:16:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:09.902 23:16:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.902 23:16:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.902 23:16:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.902 23:16:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:09.902 23:16:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:09.902 23:16:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:09.902 23:16:15 -- common/autotest_common.sh@10 -- # set +x 00:17:16.486 23:16:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:16.486 23:16:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:16.486 23:16:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:16.486 23:16:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:16.486 23:16:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:16.486 23:16:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:16.486 23:16:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:16.486 23:16:22 -- nvmf/common.sh@294 -- # net_devs=() 00:17:16.486 23:16:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:16.486 23:16:22 -- nvmf/common.sh@295 -- # e810=() 00:17:16.486 23:16:22 -- nvmf/common.sh@295 -- # local -ga e810 00:17:16.486 23:16:22 -- nvmf/common.sh@296 -- # x722=() 00:17:16.486 23:16:22 -- nvmf/common.sh@296 -- # local -ga x722 00:17:16.486 23:16:22 -- nvmf/common.sh@297 -- # mlx=() 00:17:16.486 23:16:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:16.486 23:16:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.486 23:16:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:16.486 23:16:22 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:16.486 23:16:22 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:16.486 23:16:22 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:16.486 23:16:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:16.486 23:16:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:16.486 23:16:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:16.486 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:16.486 23:16:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:17:16.486 23:16:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:16.486 23:16:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:16.486 23:16:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:16.486 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:16.486 23:16:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:16.486 23:16:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:16.486 23:16:22 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:16.486 23:16:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:16.486 23:16:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.486 23:16:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:16.486 23:16:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.486 23:16:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:16.486 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.487 23:16:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.487 23:16:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:16.487 23:16:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.487 23:16:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:16.487 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.487 23:16:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:16.487 23:16:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:16.487 23:16:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:16.487 23:16:22 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:16.487 23:16:22 -- nvmf/common.sh@57 -- # uname 00:17:16.487 23:16:22 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:16.487 23:16:22 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:16.487 23:16:22 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:16.487 23:16:22 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:16.487 23:16:22 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:16.487 23:16:22 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:16.487 23:16:22 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:16.487 23:16:22 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:16.487 23:16:22 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:16.487 23:16:22 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:16.487 23:16:22 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:16.487 23:16:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:16.487 23:16:22 -- 
nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:16.487 23:16:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:16.487 23:16:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:16.487 23:16:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:16.487 23:16:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@104 -- # continue 2 00:17:16.487 23:16:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@104 -- # continue 2 00:17:16.487 23:16:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:16.487 23:16:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:16.487 23:16:22 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:16.487 23:16:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:16.487 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:16.487 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:16.487 altname enp217s0f0np0 00:17:16.487 altname ens818f0np0 00:17:16.487 inet 192.168.100.8/24 scope global mlx_0_0 00:17:16.487 valid_lft forever preferred_lft forever 00:17:16.487 23:16:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:16.487 23:16:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:16.487 23:16:22 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:16.487 23:16:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:16.487 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:16.487 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:16.487 altname enp217s0f1np1 00:17:16.487 altname ens818f1np1 00:17:16.487 inet 192.168.100.9/24 scope global mlx_0_1 00:17:16.487 valid_lft forever preferred_lft forever 00:17:16.487 23:16:22 -- nvmf/common.sh@410 -- # return 0 00:17:16.487 23:16:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:16.487 23:16:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:16.487 23:16:22 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:16.487 23:16:22 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:16.487 23:16:22 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:16.487 23:16:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:16.487 23:16:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:16.487 23:16:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:16.487 23:16:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:16.487 23:16:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@104 -- # continue 2 00:17:16.487 23:16:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.487 23:16:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:16.487 23:16:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@104 -- # continue 2 00:17:16.487 23:16:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:16.487 23:16:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:16.487 23:16:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:16.487 23:16:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:16.487 23:16:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:16.487 23:16:22 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:16.487 192.168.100.9' 00:17:16.487 23:16:22 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:16.487 192.168.100.9' 00:17:16.487 23:16:22 -- nvmf/common.sh@445 -- # head -n 1 00:17:16.487 23:16:22 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:16.487 23:16:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:16.487 192.168.100.9' 00:17:16.487 23:16:22 -- nvmf/common.sh@446 -- # tail -n +2 00:17:16.487 23:16:22 -- nvmf/common.sh@446 -- # head -n 1 00:17:16.747 23:16:22 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:16.747 23:16:22 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:16.747 23:16:22 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:16.747 23:16:22 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:16.747 23:16:22 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:16.747 23:16:22 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:16.747 23:16:22 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:16.747 23:16:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:16.747 23:16:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:16.747 23:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:16.747 
************************************ 00:17:16.747 START TEST nvmf_host_management 00:17:16.747 ************************************ 00:17:16.747 23:16:22 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:17:16.747 23:16:22 -- target/host_management.sh@69 -- # starttarget 00:17:16.747 23:16:22 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:16.747 23:16:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:16.747 23:16:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:16.747 23:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:16.747 23:16:22 -- nvmf/common.sh@469 -- # nvmfpid=604817 00:17:16.747 23:16:22 -- nvmf/common.sh@470 -- # waitforlisten 604817 00:17:16.747 23:16:22 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:16.747 23:16:22 -- common/autotest_common.sh@819 -- # '[' -z 604817 ']' 00:17:16.747 23:16:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.747 23:16:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:16.747 23:16:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.747 23:16:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:16.747 23:16:22 -- common/autotest_common.sh@10 -- # set +x 00:17:16.747 [2024-11-02 23:16:22.323805] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:16.747 [2024-11-02 23:16:22.323857] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.747 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.747 [2024-11-02 23:16:22.392720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.747 [2024-11-02 23:16:22.461478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:16.747 [2024-11-02 23:16:22.461609] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.747 [2024-11-02 23:16:22.461619] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.747 [2024-11-02 23:16:22.461627] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
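Here nvmfappstart launches build/bin/nvmf_tgt with core mask 0x1E and then parks in waitforlisten until the application's RPC socket answers; the helper body itself is not expanded in the trace. A simplified stand-in for that wait, assuming the default /var/tmp/spdk.sock path and the pid captured above, might look like this (the real helper also verifies the RPC server responds, which is omitted here):

# Simplified stand-in for waitforlisten, not the SPDK helper itself.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while ((retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [[ -S $sock ]] && return 0               # socket exists, target is up
        sleep 0.1
    done
    return 1
}
# e.g.  wait_for_rpc_socket 604817 /var/tmp/spdk.sock || echo 'target never came up'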
00:17:16.747 [2024-11-02 23:16:22.461748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.747 [2024-11-02 23:16:22.461818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.747 [2024-11-02 23:16:22.461907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.747 [2024-11-02 23:16:22.461908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:17.685 23:16:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:17.685 23:16:23 -- common/autotest_common.sh@852 -- # return 0 00:17:17.685 23:16:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:17.685 23:16:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:17.685 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:17:17.685 23:16:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.685 23:16:23 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:17.685 23:16:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.685 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:17:17.685 [2024-11-02 23:16:23.211029] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21b5380/0x21b9870) succeed. 00:17:17.685 [2024-11-02 23:16:23.220234] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21b6970/0x21faf10) succeed. 00:17:17.685 23:16:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:17.685 23:16:23 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:17.685 23:16:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:17.685 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:17:17.686 23:16:23 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:17.686 23:16:23 -- target/host_management.sh@23 -- # cat 00:17:17.686 23:16:23 -- target/host_management.sh@30 -- # rpc_cmd 00:17:17.686 23:16:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.686 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:17:17.686 Malloc0 00:17:17.686 [2024-11-02 23:16:23.398029] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:17.686 23:16:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:17.686 23:16:23 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:17.686 23:16:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:17.686 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:17:17.945 23:16:23 -- target/host_management.sh@73 -- # perfpid=604958 00:17:17.945 23:16:23 -- target/host_management.sh@74 -- # waitforlisten 604958 /var/tmp/bdevperf.sock 00:17:17.945 23:16:23 -- common/autotest_common.sh@819 -- # '[' -z 604958 ']' 00:17:17.945 23:16:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.945 23:16:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.945 23:16:23 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:17.945 23:16:23 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:17.945 23:16:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
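The transport is created explicitly (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192), but the rest of the subsystem setup is batched through the '# cat' / '# rpc_cmd' pair reading rpcs.txt, so the individual calls never appear in the trace; only their effects do (a Malloc0 bdev and an RDMA listener on 192.168.100.8 port 4420). A hypothetical equivalent issued call by call with scripts/rpc.py is sketched below; the command names are standard SPDK RPCs, but the exact contents of rpcs.txt in this run are an assumption.

# Hypothetical reconstruction of the batched RPCs, not the actual rpcs.txt.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # shown verbatim in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                              # sizes assumed (64 MiB, 512 B blocks)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0         # serial number assumed
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420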
00:17:17.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.945 23:16:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.945 23:16:23 -- nvmf/common.sh@520 -- # config=() 00:17:17.945 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:17:17.945 23:16:23 -- nvmf/common.sh@520 -- # local subsystem config 00:17:17.945 23:16:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:17.945 23:16:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:17.945 { 00:17:17.945 "params": { 00:17:17.945 "name": "Nvme$subsystem", 00:17:17.945 "trtype": "$TEST_TRANSPORT", 00:17:17.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.945 "adrfam": "ipv4", 00:17:17.945 "trsvcid": "$NVMF_PORT", 00:17:17.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.945 "hdgst": ${hdgst:-false}, 00:17:17.945 "ddgst": ${ddgst:-false} 00:17:17.945 }, 00:17:17.945 "method": "bdev_nvme_attach_controller" 00:17:17.945 } 00:17:17.945 EOF 00:17:17.945 )") 00:17:17.945 23:16:23 -- nvmf/common.sh@542 -- # cat 00:17:17.945 23:16:23 -- nvmf/common.sh@544 -- # jq . 00:17:17.945 23:16:23 -- nvmf/common.sh@545 -- # IFS=, 00:17:17.945 23:16:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:17.945 "params": { 00:17:17.945 "name": "Nvme0", 00:17:17.945 "trtype": "rdma", 00:17:17.945 "traddr": "192.168.100.8", 00:17:17.945 "adrfam": "ipv4", 00:17:17.945 "trsvcid": "4420", 00:17:17.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:17.945 "hdgst": false, 00:17:17.945 "ddgst": false 00:17:17.945 }, 00:17:17.945 "method": "bdev_nvme_attach_controller" 00:17:17.945 }' 00:17:17.945 [2024-11-02 23:16:23.495806] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:17.945 [2024-11-02 23:16:23.495857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604958 ] 00:17:17.945 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.945 [2024-11-02 23:16:23.566395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.945 [2024-11-02 23:16:23.634220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.205 Running I/O for 10 seconds... 
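gen_nvmf_target_json expands the heredoc above into a single bdev_nvme_attach_controller entry, and the resolved params block is what the final printf prints. For readability, the full document handed to bdevperf over --json /dev/fd/63 is reproduced here: the params are copied from the printf output, while the outer subsystems/bdev envelope is an assumption about the helper's wrapper rather than something echoed in this run.

# Reconstruction for illustration only; the envelope is assumed, the inner
# entry is copied from the printf output in the trace above.
cat <<'JSON' > bdevperf_nvme0.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
# bdevperf would then be pointed at such a file with:  --json bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10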
00:17:18.773 23:16:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.773 23:16:24 -- common/autotest_common.sh@852 -- # return 0 00:17:18.773 23:16:24 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:18.773 23:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.773 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 23:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.773 23:16:24 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.773 23:16:24 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:18.773 23:16:24 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:18.773 23:16:24 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:18.773 23:16:24 -- target/host_management.sh@52 -- # local ret=1 00:17:18.773 23:16:24 -- target/host_management.sh@53 -- # local i 00:17:18.773 23:16:24 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:18.773 23:16:24 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:18.773 23:16:24 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:18.773 23:16:24 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:18.773 23:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.773 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 23:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.773 23:16:24 -- target/host_management.sh@55 -- # read_io_count=3029 00:17:18.773 23:16:24 -- target/host_management.sh@58 -- # '[' 3029 -ge 100 ']' 00:17:18.773 23:16:24 -- target/host_management.sh@59 -- # ret=0 00:17:18.773 23:16:24 -- target/host_management.sh@60 -- # break 00:17:18.773 23:16:24 -- target/host_management.sh@64 -- # return 0 00:17:18.773 23:16:24 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:18.773 23:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.773 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 23:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.773 23:16:24 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:18.773 23:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.773 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 23:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.773 23:16:24 -- target/host_management.sh@87 -- # sleep 1 00:17:19.713 [2024-11-02 23:16:25.404610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:17:19.713 [2024-11-02 23:16:25.404648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:17:19.713 [2024-11-02 23:16:25.404676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 
[2024-11-02 23:16:25.404688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:17:19.713 [2024-11-02 23:16:25.404698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:17:19.713 [2024-11-02 23:16:25.404719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:17:19.713 [2024-11-02 23:16:25.404744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:17:19.713 [2024-11-02 23:16:25.404764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:17:19.713 [2024-11-02 23:16:25.404784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:17:19.713 [2024-11-02 23:16:25.404803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:17:19.713 [2024-11-02 23:16:25.404823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:17:19.713 [2024-11-02 23:16:25.404842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:17:19.713 [2024-11-02 23:16:25.404861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404872] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500 00:17:19.713 [2024-11-02 23:16:25.404881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:17:19.713 [2024-11-02 23:16:25.404900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:17:19.713 [2024-11-02 23:16:25.404920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182500 00:17:19.713 [2024-11-02 23:16:25.404939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:17:19.713 [2024-11-02 23:16:25.404959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:17:19.713 [2024-11-02 23:16:25.404984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.404997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182000 00:17:19.713 [2024-11-02 23:16:25.405006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:17:19.713 [2024-11-02 23:16:25.405025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:17:19.713 [2024-11-02 23:16:25.405045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:17:19.713 [2024-11-02 23:16:25.405064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:17:19.713 [2024-11-02 23:16:25.405084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:17:19.713 [2024-11-02 23:16:25.405103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182000 00:17:19.713 [2024-11-02 23:16:25.405123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182600 00:17:19.713 [2024-11-02 23:16:25.405143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:17:19.713 [2024-11-02 23:16:25.405162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:17:19.713 [2024-11-02 23:16:25.405181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:17:19.713 [2024-11-02 23:16:25.405202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182400 00:17:19.713 [2024-11-02 23:16:25.405221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.713 [2024-11-02 23:16:25.405232] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:17:19.713 [2024-11-02 23:16:25.405241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182500 00:17:19.714 [2024-11-02 23:16:25.405260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:17:19.714 [2024-11-02 23:16:25.405279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:17:19.714 [2024-11-02 23:16:25.405299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 00:17:19.714 [2024-11-02 23:16:25.405320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:17:19.714 [2024-11-02 23:16:25.405340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:17:19.714 [2024-11-02 23:16:25.405360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:17:19.714 [2024-11-02 23:16:25.405379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:17:19.714 [2024-11-02 23:16:25.405399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:17:19.714 [2024-11-02 23:16:25.405423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182700 00:17:19.714 [2024-11-02 23:16:25.405442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:19.714 [2024-11-02 23:16:25.405462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:17:19.714 [2024-11-02 23:16:25.405481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:17:19.714 [2024-11-02 23:16:25.405500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:17:19.714 [2024-11-02 23:16:25.405519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000c336000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c357000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c378000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c399000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 
key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c060000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c081000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c126000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c147000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.405898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x182300 00:17:19.714 [2024-11-02 23:16:25.405907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c250c000 sqhd:5310 p:0 m:0 dnr:0 00:17:19.714 [2024-11-02 23:16:25.407806] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 
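Every completion in the dump above carries the same status, ABORTED - SQ DELETION (00/08): generic status code type 0x00, status code 0x08, i.e. the command was aborted because its submission queue was deleted. That is the expected outcome here, since the host's access to nqn.2016-06.io.spdk:cnode0 was revoked mid-run by the nvmf_subsystem_remove_host call a few entries earlier, so the target tears down the queue pair and fails everything still in flight before bdevperf resets the controller. When triaging a dump like this offline, a quick tally of the status strings is usually enough to confirm the failure is uniform; the log file name below is assumed, not taken from this run.

# Count the distinct abort statuses in a saved copy of the bdevperf output.
grep -o 'ABORTED - SQ DELETION ([0-9/]*)' bdevperf.log | sort | uniq -c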
00:17:19.714 [2024-11-02 23:16:25.408683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:19.714 task offset: 30976 on job bdev=Nvme0n1 fails 00:17:19.714 00:17:19.714 Latency(us) 00:17:19.714 [2024-11-02T22:16:25.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.714 [2024-11-02T22:16:25.471Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:19.714 [2024-11-02T22:16:25.471Z] Job: Nvme0n1 ended in about 1.60 seconds with error 00:17:19.715 Verification LBA range: start 0x0 length 0x400 00:17:19.715 Nvme0n1 : 1.60 2064.78 129.05 40.12 0.00 30217.17 2896.69 1013343.85 00:17:19.715 [2024-11-02T22:16:25.472Z] =================================================================================================================== 00:17:19.715 [2024-11-02T22:16:25.472Z] Total : 2064.78 129.05 40.12 0.00 30217.17 2896.69 1013343.85 00:17:19.715 [2024-11-02 23:16:25.410367] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:19.715 23:16:25 -- target/host_management.sh@91 -- # kill -9 604958 00:17:19.715 23:16:25 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:19.715 23:16:25 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:19.715 23:16:25 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:19.715 23:16:25 -- nvmf/common.sh@520 -- # config=() 00:17:19.715 23:16:25 -- nvmf/common.sh@520 -- # local subsystem config 00:17:19.715 23:16:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:19.715 23:16:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:19.715 { 00:17:19.715 "params": { 00:17:19.715 "name": "Nvme$subsystem", 00:17:19.715 "trtype": "$TEST_TRANSPORT", 00:17:19.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.715 "adrfam": "ipv4", 00:17:19.715 "trsvcid": "$NVMF_PORT", 00:17:19.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.715 "hdgst": ${hdgst:-false}, 00:17:19.715 "ddgst": ${ddgst:-false} 00:17:19.715 }, 00:17:19.715 "method": "bdev_nvme_attach_controller" 00:17:19.715 } 00:17:19.715 EOF 00:17:19.715 )") 00:17:19.715 23:16:25 -- nvmf/common.sh@542 -- # cat 00:17:19.715 23:16:25 -- nvmf/common.sh@544 -- # jq . 00:17:19.715 23:16:25 -- nvmf/common.sh@545 -- # IFS=, 00:17:19.715 23:16:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:19.715 "params": { 00:17:19.715 "name": "Nvme0", 00:17:19.715 "trtype": "rdma", 00:17:19.715 "traddr": "192.168.100.8", 00:17:19.715 "adrfam": "ipv4", 00:17:19.715 "trsvcid": "4420", 00:17:19.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:19.715 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:19.715 "hdgst": false, 00:17:19.715 "ddgst": false 00:17:19.715 }, 00:17:19.715 "method": "bdev_nvme_attach_controller" 00:17:19.715 }' 00:17:19.715 [2024-11-02 23:16:25.466584] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
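The bdevperf summary above is internally consistent: with 64 KiB I/Os, the reported 2064.78 IOPS works out to the 129.05 MiB/s shown in the MiB/s column, and with a queue depth of 64 the roughly 30.2 ms average latency implies about 64 / 0.0302, some 2100 completions per second, close to the successful-plus-failed rate (2064.78 + 40.12). A one-liner to check the throughput figure:

# Throughput sanity check: IOPS x IO size, in MiB/s (65536-byte I/Os).
echo 'scale=3; 2064.78 * 65536 / 1048576' | bc    # -> 129.048, matching the MiB/s column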
00:17:19.715 [2024-11-02 23:16:25.466636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605410 ] 00:17:19.974 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.974 [2024-11-02 23:16:25.537234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.974 [2024-11-02 23:16:25.604961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.233 Running I/O for 1 seconds... 00:17:21.170 00:17:21.170 Latency(us) 00:17:21.170 [2024-11-02T22:16:26.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.170 [2024-11-02T22:16:26.927Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:21.170 Verification LBA range: start 0x0 length 0x400 00:17:21.171 Nvme0n1 : 1.01 5618.61 351.16 0.00 0.00 11218.38 1042.02 24117.25 00:17:21.171 [2024-11-02T22:16:26.928Z] =================================================================================================================== 00:17:21.171 [2024-11-02T22:16:26.928Z] Total : 5618.61 351.16 0.00 0.00 11218.38 1042.02 24117.25 00:17:21.430 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 604958 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:21.430 23:16:27 -- target/host_management.sh@101 -- # stoptarget 00:17:21.430 23:16:27 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:21.430 23:16:27 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:21.430 23:16:27 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:21.430 23:16:27 -- target/host_management.sh@40 -- # nvmftestfini 00:17:21.430 23:16:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:21.430 23:16:27 -- nvmf/common.sh@116 -- # sync 00:17:21.430 23:16:27 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:21.430 23:16:27 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:21.430 23:16:27 -- nvmf/common.sh@119 -- # set +e 00:17:21.430 23:16:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:21.430 23:16:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:21.430 rmmod nvme_rdma 00:17:21.430 rmmod nvme_fabrics 00:17:21.430 23:16:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:21.430 23:16:27 -- nvmf/common.sh@123 -- # set -e 00:17:21.430 23:16:27 -- nvmf/common.sh@124 -- # return 0 00:17:21.430 23:16:27 -- nvmf/common.sh@477 -- # '[' -n 604817 ']' 00:17:21.430 23:16:27 -- nvmf/common.sh@478 -- # killprocess 604817 00:17:21.430 23:16:27 -- common/autotest_common.sh@926 -- # '[' -z 604817 ']' 00:17:21.430 23:16:27 -- common/autotest_common.sh@930 -- # kill -0 604817 00:17:21.430 23:16:27 -- common/autotest_common.sh@931 -- # uname 00:17:21.430 23:16:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.430 23:16:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 604817 00:17:21.430 23:16:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:21.430 23:16:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:21.430 23:16:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 604817' 00:17:21.430 killing process with 
pid 604817 00:17:21.430 23:16:27 -- common/autotest_common.sh@945 -- # kill 604817 00:17:21.430 23:16:27 -- common/autotest_common.sh@950 -- # wait 604817 00:17:21.690 [2024-11-02 23:16:27.414339] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:21.690 23:16:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:21.690 23:16:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:21.690 00:17:21.690 real 0m5.165s 00:17:21.690 user 0m23.159s 00:17:21.690 sys 0m0.988s 00:17:21.690 23:16:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.690 23:16:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.690 ************************************ 00:17:21.690 END TEST nvmf_host_management 00:17:21.690 ************************************ 00:17:21.954 23:16:27 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:21.954 00:17:21.954 real 0m12.063s 00:17:21.954 user 0m25.120s 00:17:21.954 sys 0m6.141s 00:17:21.954 23:16:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.954 23:16:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.954 ************************************ 00:17:21.954 END TEST nvmf_host_management 00:17:21.954 ************************************ 00:17:21.954 23:16:27 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:21.954 23:16:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:21.954 23:16:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:21.954 23:16:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.954 ************************************ 00:17:21.954 START TEST nvmf_lvol 00:17:21.954 ************************************ 00:17:21.954 23:16:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:21.954 * Looking for test storage... 
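The teardown traced just above (nvmftestfini, nvmfcleanup, killprocess 604817) checks that the pid is still alive and inspects its command name (here reactor_1, not the sudo wrapper) before issuing a plain kill followed by wait. A condensed sketch of that pattern, not the helper verbatim, is:

# Condensed sketch of the killprocess pattern traced above; the real helper
# treats a sudo-wrapped process specially, which is omitted here.
killprocess_sketch() {
    local pid=$1 name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do, already gone
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it when it is our own child
}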
00:17:21.954 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:21.954 23:16:27 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.954 23:16:27 -- nvmf/common.sh@7 -- # uname -s 00:17:21.954 23:16:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.954 23:16:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.954 23:16:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.955 23:16:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.955 23:16:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.955 23:16:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.955 23:16:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.955 23:16:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.955 23:16:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.955 23:16:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.955 23:16:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:21.955 23:16:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:21.955 23:16:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.955 23:16:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.955 23:16:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.955 23:16:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:21.955 23:16:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.955 23:16:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.955 23:16:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.955 23:16:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.955 23:16:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.955 23:16:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.955 23:16:27 -- paths/export.sh@5 -- # export PATH 00:17:21.955 23:16:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.955 23:16:27 -- nvmf/common.sh@46 -- # : 0 00:17:21.955 23:16:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:21.955 23:16:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:21.955 23:16:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:21.955 23:16:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.955 23:16:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.955 23:16:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:21.955 23:16:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:21.955 23:16:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:21.955 23:16:27 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:21.955 23:16:27 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:21.955 23:16:27 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:21.955 23:16:27 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:21.955 23:16:27 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:21.955 23:16:27 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:21.955 23:16:27 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:21.955 23:16:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.955 23:16:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:21.955 23:16:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:21.955 23:16:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:21.955 23:16:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.955 23:16:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.955 23:16:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.955 23:16:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:21.955 23:16:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:21.955 23:16:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:21.955 23:16:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.621 23:16:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:28.621 23:16:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:28.621 23:16:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:28.621 23:16:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:28.621 23:16:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:28.621 23:16:33 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:17:28.621 23:16:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:28.621 23:16:33 -- nvmf/common.sh@294 -- # net_devs=() 00:17:28.621 23:16:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:28.621 23:16:33 -- nvmf/common.sh@295 -- # e810=() 00:17:28.621 23:16:33 -- nvmf/common.sh@295 -- # local -ga e810 00:17:28.621 23:16:33 -- nvmf/common.sh@296 -- # x722=() 00:17:28.621 23:16:33 -- nvmf/common.sh@296 -- # local -ga x722 00:17:28.621 23:16:33 -- nvmf/common.sh@297 -- # mlx=() 00:17:28.621 23:16:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:28.621 23:16:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.621 23:16:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:28.621 23:16:33 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:28.621 23:16:33 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:28.621 23:16:33 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:28.621 23:16:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:28.621 23:16:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:28.621 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:28.621 23:16:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:28.621 23:16:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:28.621 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:28.621 23:16:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:28.621 23:16:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:28.621 23:16:33 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.621 23:16:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:28.621 23:16:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.621 23:16:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:28.621 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:28.621 23:16:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.621 23:16:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.621 23:16:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:28.621 23:16:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.621 23:16:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:28.621 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:28.621 23:16:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.621 23:16:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:28.621 23:16:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:28.621 23:16:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:28.621 23:16:33 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:28.621 23:16:33 -- nvmf/common.sh@57 -- # uname 00:17:28.621 23:16:33 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:28.621 23:16:33 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:28.621 23:16:33 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:28.621 23:16:33 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:28.621 23:16:33 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:28.621 23:16:33 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:28.621 23:16:33 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:28.621 23:16:33 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:28.621 23:16:33 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:28.621 23:16:33 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:28.621 23:16:33 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:28.621 23:16:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:28.621 23:16:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:28.621 23:16:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:28.621 23:16:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:28.621 23:16:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:28.621 23:16:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:28.621 23:16:33 -- nvmf/common.sh@104 -- # continue 2 00:17:28.621 23:16:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:17:28.621 23:16:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:28.621 23:16:33 -- nvmf/common.sh@104 -- # continue 2 00:17:28.621 23:16:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:28.621 23:16:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:28.621 23:16:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:28.621 23:16:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:28.621 23:16:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:28.621 23:16:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:28.621 23:16:33 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:28.621 23:16:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:28.621 23:16:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:28.621 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:28.621 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:28.621 altname enp217s0f0np0 00:17:28.621 altname ens818f0np0 00:17:28.621 inet 192.168.100.8/24 scope global mlx_0_0 00:17:28.621 valid_lft forever preferred_lft forever 00:17:28.622 23:16:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:28.622 23:16:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:28.622 23:16:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:28.622 23:16:33 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:28.622 23:16:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:28.622 23:16:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:28.622 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:28.622 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:28.622 altname enp217s0f1np1 00:17:28.622 altname ens818f1np1 00:17:28.622 inet 192.168.100.9/24 scope global mlx_0_1 00:17:28.622 valid_lft forever preferred_lft forever 00:17:28.622 23:16:33 -- nvmf/common.sh@410 -- # return 0 00:17:28.622 23:16:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:28.622 23:16:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:28.622 23:16:33 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:28.622 23:16:33 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:28.622 23:16:33 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:28.622 23:16:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:28.622 23:16:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:28.622 23:16:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:28.622 23:16:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:28.622 23:16:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:28.622 23:16:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:28.622 23:16:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.622 23:16:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:28.622 23:16:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:28.622 23:16:33 -- nvmf/common.sh@104 -- # continue 2 00:17:28.622 23:16:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:28.622 23:16:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.622 23:16:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:17:28.622 23:16:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.622 23:16:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:28.622 23:16:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:28.622 23:16:33 -- nvmf/common.sh@104 -- # continue 2 00:17:28.622 23:16:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:28.622 23:16:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:28.622 23:16:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:28.622 23:16:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:28.622 23:16:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:28.622 23:16:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:28.622 23:16:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:28.622 23:16:33 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:28.622 192.168.100.9' 00:17:28.622 23:16:33 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:28.622 192.168.100.9' 00:17:28.622 23:16:33 -- nvmf/common.sh@445 -- # head -n 1 00:17:28.622 23:16:33 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:28.622 23:16:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:28.622 192.168.100.9' 00:17:28.622 23:16:33 -- nvmf/common.sh@446 -- # head -n 1 00:17:28.622 23:16:33 -- nvmf/common.sh@446 -- # tail -n +2 00:17:28.622 23:16:33 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:28.622 23:16:33 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:28.622 23:16:33 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:28.622 23:16:33 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:28.622 23:16:33 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:28.622 23:16:33 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:28.622 23:16:34 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:28.622 23:16:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:28.622 23:16:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:28.622 23:16:34 -- common/autotest_common.sh@10 -- # set +x 00:17:28.622 23:16:34 -- nvmf/common.sh@469 -- # nvmfpid=608888 00:17:28.622 23:16:34 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:28.622 23:16:34 -- nvmf/common.sh@470 -- # waitforlisten 608888 00:17:28.622 23:16:34 -- common/autotest_common.sh@819 -- # '[' -z 608888 ']' 00:17:28.622 23:16:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.622 23:16:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:28.622 23:16:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.622 23:16:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:28.622 23:16:34 -- common/autotest_common.sh@10 -- # set +x 00:17:28.622 [2024-11-02 23:16:34.061054] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:28.622 [2024-11-02 23:16:34.061108] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.622 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.622 [2024-11-02 23:16:34.131660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:28.622 [2024-11-02 23:16:34.198336] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:28.622 [2024-11-02 23:16:34.198453] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.622 [2024-11-02 23:16:34.198462] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.622 [2024-11-02 23:16:34.198470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.622 [2024-11-02 23:16:34.198565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.622 [2024-11-02 23:16:34.198658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.622 [2024-11-02 23:16:34.198661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.191 23:16:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:29.191 23:16:34 -- common/autotest_common.sh@852 -- # return 0 00:17:29.191 23:16:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:29.191 23:16:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:29.191 23:16:34 -- common/autotest_common.sh@10 -- # set +x 00:17:29.191 23:16:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.191 23:16:34 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:29.450 [2024-11-02 23:16:35.104363] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13e1560/0x13e5a50) succeed. 00:17:29.450 [2024-11-02 23:16:35.113460] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13e2ab0/0x14270f0) succeed. 
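The bring-up traced above reduces to a short, repeatable sequence: load the RDMA kernel modules, read the IPv4 address off each Mellanox netdev, start the NVMe-oF target, and create the RDMA transport. A minimal bash sketch of that sequence follows; $SPDK_DIR is shorthand for /var/jenkins/workspace/nvmf-phy-autotest/spdk, and the interface names and 192.168.100.x addresses are specific to this test bed.

# Kernel modules loaded by load_ib_rdma_modules above, plus the host-side nvme-rdma module.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
    modprobe "$mod"
done

# Same pipeline as get_ip_address above: first IPv4 address of an RDMA-capable netdev.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run

# Start the target on cores 0-2 (the harness waits for /var/tmp/spdk.sock before issuing RPCs),
# then create the RDMA transport with the options used by this job.
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192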
00:17:29.709 23:16:35 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:29.709 23:16:35 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:29.709 23:16:35 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:29.969 23:16:35 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:29.969 23:16:35 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:30.228 23:16:35 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:30.488 23:16:36 -- target/nvmf_lvol.sh@29 -- # lvs=d3c93162-3880-4e22-9197-ec7b53600724 00:17:30.488 23:16:36 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3c93162-3880-4e22-9197-ec7b53600724 lvol 20 00:17:30.488 23:16:36 -- target/nvmf_lvol.sh@32 -- # lvol=7698e5fd-2f6c-4e83-8890-2b0e84c3ba06 00:17:30.488 23:16:36 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:30.747 23:16:36 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7698e5fd-2f6c-4e83-8890-2b0e84c3ba06 00:17:31.006 23:16:36 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:31.006 [2024-11-02 23:16:36.728179] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:31.007 23:16:36 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:31.266 23:16:36 -- target/nvmf_lvol.sh@42 -- # perf_pid=609460 00:17:31.266 23:16:36 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:31.266 23:16:36 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:31.266 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.203 23:16:37 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7698e5fd-2f6c-4e83-8890-2b0e84c3ba06 MY_SNAPSHOT 00:17:32.462 23:16:38 -- target/nvmf_lvol.sh@47 -- # snapshot=95d20ffb-5a75-4861-814a-5be6a59ed549 00:17:32.462 23:16:38 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7698e5fd-2f6c-4e83-8890-2b0e84c3ba06 30 00:17:32.722 23:16:38 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 95d20ffb-5a75-4861-814a-5be6a59ed549 MY_CLONE 00:17:32.981 23:16:38 -- target/nvmf_lvol.sh@49 -- # clone=058889c3-bf52-40f6-93fe-ee86f0169942 00:17:32.981 23:16:38 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 058889c3-bf52-40f6-93fe-ee86f0169942 00:17:32.981 23:16:38 -- target/nvmf_lvol.sh@53 -- # wait 609460 00:17:42.962 Initializing NVMe Controllers 00:17:42.962 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:17:42.962 Controller IO queue size 128, less than required. 00:17:42.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:42.962 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:42.962 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:42.962 Initialization complete. Launching workers. 00:17:42.962 ======================================================== 00:17:42.962 Latency(us) 00:17:42.962 Device Information : IOPS MiB/s Average min max 00:17:42.962 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17343.20 67.75 7382.82 2038.73 36579.21 00:17:42.962 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17266.00 67.45 7414.99 3358.05 43902.27 00:17:42.962 ======================================================== 00:17:42.962 Total : 34609.21 135.19 7398.87 2038.73 43902.27 00:17:42.962 00:17:42.962 23:16:48 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:42.962 23:16:48 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7698e5fd-2f6c-4e83-8890-2b0e84c3ba06 00:17:42.962 23:16:48 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3c93162-3880-4e22-9197-ec7b53600724 00:17:43.221 23:16:48 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:43.222 23:16:48 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:43.222 23:16:48 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:43.222 23:16:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:43.222 23:16:48 -- nvmf/common.sh@116 -- # sync 00:17:43.222 23:16:48 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:43.222 23:16:48 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:43.222 23:16:48 -- nvmf/common.sh@119 -- # set +e 00:17:43.222 23:16:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:43.222 23:16:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:43.222 rmmod nvme_rdma 00:17:43.222 rmmod nvme_fabrics 00:17:43.222 23:16:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:43.222 23:16:48 -- nvmf/common.sh@123 -- # set -e 00:17:43.222 23:16:48 -- nvmf/common.sh@124 -- # return 0 00:17:43.222 23:16:48 -- nvmf/common.sh@477 -- # '[' -n 608888 ']' 00:17:43.222 23:16:48 -- nvmf/common.sh@478 -- # killprocess 608888 00:17:43.222 23:16:48 -- common/autotest_common.sh@926 -- # '[' -z 608888 ']' 00:17:43.222 23:16:48 -- common/autotest_common.sh@930 -- # kill -0 608888 00:17:43.222 23:16:48 -- common/autotest_common.sh@931 -- # uname 00:17:43.222 23:16:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:43.222 23:16:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 608888 00:17:43.222 23:16:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:43.222 23:16:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:43.222 23:16:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 608888' 00:17:43.222 killing process with pid 608888 00:17:43.222 23:16:48 -- common/autotest_common.sh@945 -- # kill 608888 00:17:43.222 23:16:48 -- common/autotest_common.sh@950 -- # wait 608888 00:17:43.481 23:16:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:43.481 23:16:49 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:43.481 00:17:43.481 real 0m21.700s 00:17:43.481 user 1m11.241s 00:17:43.481 sys 0m6.026s 00:17:43.481 23:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.481 23:16:49 -- common/autotest_common.sh@10 -- # set +x 00:17:43.481 ************************************ 00:17:43.481 END TEST nvmf_lvol 00:17:43.481 ************************************ 00:17:43.741 23:16:49 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:43.741 23:16:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:43.741 23:16:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:43.741 23:16:49 -- common/autotest_common.sh@10 -- # set +x 00:17:43.741 ************************************ 00:17:43.741 START TEST nvmf_lvs_grow 00:17:43.741 ************************************ 00:17:43.741 23:16:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:43.741 * Looking for test storage... 00:17:43.741 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:43.741 23:16:49 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.741 23:16:49 -- nvmf/common.sh@7 -- # uname -s 00:17:43.741 23:16:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.741 23:16:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.741 23:16:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.741 23:16:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.741 23:16:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.741 23:16:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.741 23:16:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.741 23:16:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.741 23:16:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.741 23:16:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.741 23:16:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:43.741 23:16:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:43.741 23:16:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.741 23:16:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.741 23:16:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.741 23:16:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:43.741 23:16:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.741 23:16:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.741 23:16:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.741 23:16:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:43.741 23:16:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.741 23:16:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.741 23:16:49 -- paths/export.sh@5 -- # export PATH 00:17:43.741 23:16:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.741 23:16:49 -- nvmf/common.sh@46 -- # : 0 00:17:43.741 23:16:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:43.741 23:16:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:43.741 23:16:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:43.741 23:16:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.741 23:16:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.741 23:16:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:43.741 23:16:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:43.741 23:16:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:43.741 23:16:49 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:43.741 23:16:49 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.741 23:16:49 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:43.741 23:16:49 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:43.741 23:16:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.741 23:16:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:43.741 23:16:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:43.741 23:16:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:43.741 23:16:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.741 23:16:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.741 23:16:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.741 23:16:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:43.741 23:16:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:43.741 23:16:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:43.741 
23:16:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.312 23:16:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:50.312 23:16:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:50.312 23:16:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:50.312 23:16:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:50.312 23:16:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:50.312 23:16:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:50.312 23:16:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:50.312 23:16:55 -- nvmf/common.sh@294 -- # net_devs=() 00:17:50.312 23:16:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:50.312 23:16:55 -- nvmf/common.sh@295 -- # e810=() 00:17:50.312 23:16:55 -- nvmf/common.sh@295 -- # local -ga e810 00:17:50.312 23:16:55 -- nvmf/common.sh@296 -- # x722=() 00:17:50.312 23:16:55 -- nvmf/common.sh@296 -- # local -ga x722 00:17:50.312 23:16:55 -- nvmf/common.sh@297 -- # mlx=() 00:17:50.312 23:16:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:50.312 23:16:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.312 23:16:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:50.312 23:16:55 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:50.312 23:16:55 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:50.312 23:16:55 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:50.312 23:16:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:50.312 23:16:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:50.312 23:16:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:50.312 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:50.312 23:16:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:50.312 23:16:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:50.312 23:16:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:50.312 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:50.312 23:16:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:50.312 23:16:55 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:50.312 23:16:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:50.312 23:16:55 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:50.312 23:16:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.312 23:16:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:50.312 23:16:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.312 23:16:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:50.312 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:50.312 23:16:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.312 23:16:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:50.312 23:16:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.312 23:16:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:50.312 23:16:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.312 23:16:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:50.312 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:50.312 23:16:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.312 23:16:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:50.312 23:16:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:50.312 23:16:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:50.312 23:16:55 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:50.312 23:16:55 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:50.312 23:16:55 -- nvmf/common.sh@57 -- # uname 00:17:50.312 23:16:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:50.312 23:16:56 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:50.312 23:16:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:50.312 23:16:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:50.312 23:16:56 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:50.312 23:16:56 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:50.312 23:16:56 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:50.312 23:16:56 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:50.312 23:16:56 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:50.312 23:16:56 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:50.312 23:16:56 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:50.313 23:16:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:50.313 23:16:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:50.313 23:16:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:50.313 23:16:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:50.572 23:16:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:50.572 23:16:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:50.572 23:16:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.572 23:16:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:50.572 
23:16:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:50.572 23:16:56 -- nvmf/common.sh@104 -- # continue 2 00:17:50.572 23:16:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:50.572 23:16:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.572 23:16:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:50.573 23:16:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.573 23:16:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:50.573 23:16:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:50.573 23:16:56 -- nvmf/common.sh@104 -- # continue 2 00:17:50.573 23:16:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:50.573 23:16:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:50.573 23:16:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:50.573 23:16:56 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:50.573 23:16:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:50.573 23:16:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:50.573 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:50.573 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:50.573 altname enp217s0f0np0 00:17:50.573 altname ens818f0np0 00:17:50.573 inet 192.168.100.8/24 scope global mlx_0_0 00:17:50.573 valid_lft forever preferred_lft forever 00:17:50.573 23:16:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:50.573 23:16:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:50.573 23:16:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:50.573 23:16:56 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:50.573 23:16:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:50.573 23:16:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:50.573 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:50.573 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:50.573 altname enp217s0f1np1 00:17:50.573 altname ens818f1np1 00:17:50.573 inet 192.168.100.9/24 scope global mlx_0_1 00:17:50.573 valid_lft forever preferred_lft forever 00:17:50.573 23:16:56 -- nvmf/common.sh@410 -- # return 0 00:17:50.573 23:16:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:50.573 23:16:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:50.573 23:16:56 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:50.573 23:16:56 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:50.573 23:16:56 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:50.573 23:16:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:50.573 23:16:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:50.573 23:16:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:50.573 23:16:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:50.573 23:16:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:50.573 23:16:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:50.573 23:16:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:17:50.573 23:16:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:50.573 23:16:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:50.573 23:16:56 -- nvmf/common.sh@104 -- # continue 2 00:17:50.573 23:16:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:50.573 23:16:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.573 23:16:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:50.573 23:16:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.573 23:16:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:50.573 23:16:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:50.573 23:16:56 -- nvmf/common.sh@104 -- # continue 2 00:17:50.573 23:16:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:50.573 23:16:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:50.573 23:16:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:50.573 23:16:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:50.573 23:16:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:50.573 23:16:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:50.573 23:16:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:50.573 23:16:56 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:50.573 192.168.100.9' 00:17:50.573 23:16:56 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:50.573 192.168.100.9' 00:17:50.573 23:16:56 -- nvmf/common.sh@445 -- # head -n 1 00:17:50.573 23:16:56 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:50.573 23:16:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:50.573 192.168.100.9' 00:17:50.573 23:16:56 -- nvmf/common.sh@446 -- # tail -n +2 00:17:50.573 23:16:56 -- nvmf/common.sh@446 -- # head -n 1 00:17:50.573 23:16:56 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:50.573 23:16:56 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:50.573 23:16:56 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:50.573 23:16:56 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:50.573 23:16:56 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:50.573 23:16:56 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:50.573 23:16:56 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:50.573 23:16:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:50.573 23:16:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:50.573 23:16:56 -- common/autotest_common.sh@10 -- # set +x 00:17:50.573 23:16:56 -- nvmf/common.sh@469 -- # nvmfpid=614793 00:17:50.573 23:16:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:50.573 23:16:56 -- nvmf/common.sh@470 -- # waitforlisten 614793 00:17:50.573 23:16:56 -- common/autotest_common.sh@819 -- # '[' -z 614793 ']' 00:17:50.573 23:16:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.573 23:16:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:50.573 23:16:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.573 23:16:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:50.573 23:16:56 -- common/autotest_common.sh@10 -- # set +x 00:17:50.573 [2024-11-02 23:16:56.295867] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:50.573 [2024-11-02 23:16:56.295923] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.573 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.833 [2024-11-02 23:16:56.366004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.833 [2024-11-02 23:16:56.439221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:50.833 [2024-11-02 23:16:56.439331] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.833 [2024-11-02 23:16:56.439341] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.833 [2024-11-02 23:16:56.439349] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.833 [2024-11-02 23:16:56.439376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.401 23:16:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:51.401 23:16:57 -- common/autotest_common.sh@852 -- # return 0 00:17:51.401 23:16:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:51.401 23:16:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:51.401 23:16:57 -- common/autotest_common.sh@10 -- # set +x 00:17:51.401 23:16:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.401 23:16:57 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:51.660 [2024-11-02 23:16:57.329041] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c1cf30/0x1c21420) succeed. 00:17:51.660 [2024-11-02 23:16:57.338039] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c1e430/0x1c62ac0) succeed. 
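The nvmf_lvol test that finished above (END TEST nvmf_lvol) exercised the logical-volume path end to end. Condensed into a sketch, the RPC chain it traced looks as follows; $rpc stands in for $SPDK_DIR/scripts/rpc.py, the run-specific bdev names and UUIDs are captured into shell variables, and the sizes and perf options are the ones shown in the trace.

rpc=$SPDK_DIR/scripts/rpc.py

# Two 64 MiB malloc bdevs striped into a RAID0, hosting an lvstore with one 20 MiB lvol.
m0=$($rpc bdev_malloc_create 64 512)
m1=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Expose the lvol over NVMe-oF/RDMA on the first target IP.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

# 10 s of random writes from spdk_nvme_perf while the lvol is snapshotted, resized,
# cloned and the clone inflated underneath the I/O.
$SPDK_DIR/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait

# Teardown mirrors the tail of the trace: subsystem first, then lvol, then lvstore.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"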
00:17:51.660 23:16:57 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:51.660 23:16:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:51.660 23:16:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.660 23:16:57 -- common/autotest_common.sh@10 -- # set +x 00:17:51.660 ************************************ 00:17:51.660 START TEST lvs_grow_clean 00:17:51.660 ************************************ 00:17:51.660 23:16:57 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:51.660 23:16:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:51.660 23:16:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:51.660 23:16:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:51.660 23:16:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:51.660 23:16:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:51.660 23:16:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:51.660 23:16:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.919 23:16:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.919 23:16:57 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:51.919 23:16:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:51.919 23:16:57 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:52.178 23:16:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:17:52.178 23:16:57 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:17:52.178 23:16:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:52.437 23:16:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:52.437 23:16:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:52.437 23:16:57 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba lvol 150 00:17:52.437 23:16:58 -- target/nvmf_lvs_grow.sh@33 -- # lvol=fc1ed04f-f91d-4080-93b1-982363dba4bf 00:17:52.437 23:16:58 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:52.437 23:16:58 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:52.697 [2024-11-02 23:16:58.294208] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:52.697 [2024-11-02 23:16:58.294262] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:52.697 true 00:17:52.697 23:16:58 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:17:52.697 23:16:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:52.956 23:16:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:52.956 23:16:58 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:52.956 23:16:58 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc1ed04f-f91d-4080-93b1-982363dba4bf 00:17:53.214 23:16:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:53.474 [2024-11-02 23:16:58.984464] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:53.474 23:16:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:53.474 23:16:59 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=615376 00:17:53.474 23:16:59 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.474 23:16:59 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:53.474 23:16:59 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 615376 /var/tmp/bdevperf.sock 00:17:53.474 23:16:59 -- common/autotest_common.sh@819 -- # '[' -z 615376 ']' 00:17:53.474 23:16:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.474 23:16:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:53.474 23:16:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.474 23:16:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:53.474 23:16:59 -- common/autotest_common.sh@10 -- # set +x 00:17:53.474 [2024-11-02 23:16:59.218924] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:53.474 [2024-11-02 23:16:59.218982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615376 ] 00:17:53.733 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.733 [2024-11-02 23:16:59.289468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.733 [2024-11-02 23:16:59.362998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.301 23:17:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:54.301 23:17:00 -- common/autotest_common.sh@852 -- # return 0 00:17:54.301 23:17:00 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:54.559 Nvme0n1 00:17:54.818 23:17:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:54.819 [ 00:17:54.819 { 00:17:54.819 "name": "Nvme0n1", 00:17:54.819 "aliases": [ 00:17:54.819 "fc1ed04f-f91d-4080-93b1-982363dba4bf" 00:17:54.819 ], 00:17:54.819 "product_name": "NVMe disk", 00:17:54.819 "block_size": 4096, 00:17:54.819 "num_blocks": 38912, 00:17:54.819 "uuid": "fc1ed04f-f91d-4080-93b1-982363dba4bf", 00:17:54.819 "assigned_rate_limits": { 00:17:54.819 "rw_ios_per_sec": 0, 00:17:54.819 "rw_mbytes_per_sec": 0, 00:17:54.819 "r_mbytes_per_sec": 0, 00:17:54.819 "w_mbytes_per_sec": 0 00:17:54.819 }, 00:17:54.819 "claimed": false, 00:17:54.819 "zoned": false, 00:17:54.819 "supported_io_types": { 00:17:54.819 "read": true, 00:17:54.819 "write": true, 00:17:54.819 "unmap": true, 00:17:54.819 "write_zeroes": true, 00:17:54.819 "flush": true, 00:17:54.819 "reset": true, 00:17:54.819 "compare": true, 00:17:54.819 "compare_and_write": true, 00:17:54.819 "abort": true, 00:17:54.819 "nvme_admin": true, 00:17:54.819 "nvme_io": true 00:17:54.819 }, 00:17:54.819 "memory_domains": [ 00:17:54.819 { 00:17:54.819 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:54.819 "dma_device_type": 0 00:17:54.819 } 00:17:54.819 ], 00:17:54.819 "driver_specific": { 00:17:54.819 "nvme": [ 00:17:54.819 { 00:17:54.819 "trid": { 00:17:54.819 "trtype": "RDMA", 00:17:54.819 "adrfam": "IPv4", 00:17:54.819 "traddr": "192.168.100.8", 00:17:54.819 "trsvcid": "4420", 00:17:54.819 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:54.819 }, 00:17:54.819 "ctrlr_data": { 00:17:54.819 "cntlid": 1, 00:17:54.819 "vendor_id": "0x8086", 00:17:54.819 "model_number": "SPDK bdev Controller", 00:17:54.819 "serial_number": "SPDK0", 00:17:54.819 "firmware_revision": "24.01.1", 00:17:54.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:54.819 "oacs": { 00:17:54.819 "security": 0, 00:17:54.819 "format": 0, 00:17:54.819 "firmware": 0, 00:17:54.819 "ns_manage": 0 00:17:54.819 }, 00:17:54.819 "multi_ctrlr": true, 00:17:54.819 "ana_reporting": false 00:17:54.819 }, 00:17:54.819 "vs": { 00:17:54.819 "nvme_version": "1.3" 00:17:54.819 }, 00:17:54.819 "ns_data": { 00:17:54.819 "id": 1, 00:17:54.819 "can_share": true 00:17:54.819 } 00:17:54.819 } 00:17:54.819 ], 00:17:54.819 "mp_policy": "active_passive" 00:17:54.819 } 00:17:54.819 } 00:17:54.819 ] 00:17:54.819 23:17:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=615648 00:17:54.819 23:17:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:54.819 23:17:00 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:55.077 Running I/O for 10 seconds... 00:17:56.015 Latency(us) 00:17:56.015 [2024-11-02T22:17:01.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.015 [2024-11-02T22:17:01.772Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.015 Nvme0n1 : 1.00 36002.00 140.63 0.00 0.00 0.00 0.00 0.00 00:17:56.015 [2024-11-02T22:17:01.772Z] =================================================================================================================== 00:17:56.015 [2024-11-02T22:17:01.772Z] Total : 36002.00 140.63 0.00 0.00 0.00 0.00 0.00 00:17:56.015 00:17:56.952 23:17:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:17:56.952 [2024-11-02T22:17:02.709Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.952 Nvme0n1 : 2.00 36527.50 142.69 0.00 0.00 0.00 0.00 0.00 00:17:56.952 [2024-11-02T22:17:02.709Z] =================================================================================================================== 00:17:56.952 [2024-11-02T22:17:02.709Z] Total : 36527.50 142.69 0.00 0.00 0.00 0.00 0.00 00:17:56.952 00:17:56.952 true 00:17:56.952 23:17:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:17:56.952 23:17:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:57.211 23:17:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:57.211 23:17:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:57.211 23:17:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 615648 00:17:58.144 [2024-11-02T22:17:03.901Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.144 Nvme0n1 : 3.00 36671.67 143.25 0.00 0.00 0.00 0.00 0.00 00:17:58.145 [2024-11-02T22:17:03.902Z] =================================================================================================================== 00:17:58.145 [2024-11-02T22:17:03.902Z] Total : 36671.67 143.25 0.00 0.00 0.00 0.00 0.00 00:17:58.145 00:17:59.080 [2024-11-02T22:17:04.837Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.080 Nvme0n1 : 4.00 36848.00 143.94 0.00 0.00 0.00 0.00 0.00 00:17:59.080 [2024-11-02T22:17:04.837Z] =================================================================================================================== 00:17:59.080 [2024-11-02T22:17:04.837Z] Total : 36848.00 143.94 0.00 0.00 0.00 0.00 0.00 00:17:59.081 00:18:00.018 [2024-11-02T22:17:05.775Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.018 Nvme0n1 : 5.00 36960.60 144.38 0.00 0.00 0.00 0.00 0.00 00:18:00.018 [2024-11-02T22:17:05.775Z] =================================================================================================================== 00:18:00.018 [2024-11-02T22:17:05.775Z] Total : 36960.60 144.38 0.00 0.00 0.00 0.00 0.00 00:18:00.018 00:18:00.951 [2024-11-02T22:17:06.708Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.951 Nvme0n1 : 6.00 36992.83 144.50 0.00 0.00 0.00 0.00 0.00 00:18:00.951 [2024-11-02T22:17:06.708Z] 
=================================================================================================================== 00:18:00.951 [2024-11-02T22:17:06.708Z] Total : 36992.83 144.50 0.00 0.00 0.00 0.00 0.00 00:18:00.951 00:18:01.888 [2024-11-02T22:17:07.645Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.888 Nvme0n1 : 7.00 37060.00 144.77 0.00 0.00 0.00 0.00 0.00 00:18:01.888 [2024-11-02T22:17:07.645Z] =================================================================================================================== 00:18:01.888 [2024-11-02T22:17:07.645Z] Total : 37060.00 144.77 0.00 0.00 0.00 0.00 0.00 00:18:01.888 00:18:03.263 [2024-11-02T22:17:09.020Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.263 Nvme0n1 : 8.00 37104.38 144.94 0.00 0.00 0.00 0.00 0.00 00:18:03.263 [2024-11-02T22:17:09.020Z] =================================================================================================================== 00:18:03.263 [2024-11-02T22:17:09.020Z] Total : 37104.38 144.94 0.00 0.00 0.00 0.00 0.00 00:18:03.263 00:18:04.199 [2024-11-02T22:17:09.956Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.199 Nvme0n1 : 9.00 37137.56 145.07 0.00 0.00 0.00 0.00 0.00 00:18:04.199 [2024-11-02T22:17:09.956Z] =================================================================================================================== 00:18:04.199 [2024-11-02T22:17:09.956Z] Total : 37137.56 145.07 0.00 0.00 0.00 0.00 0.00 00:18:04.199 00:18:05.136 [2024-11-02T22:17:10.893Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.136 Nvme0n1 : 10.00 37103.50 144.94 0.00 0.00 0.00 0.00 0.00 00:18:05.136 [2024-11-02T22:17:10.893Z] =================================================================================================================== 00:18:05.136 [2024-11-02T22:17:10.893Z] Total : 37103.50 144.94 0.00 0.00 0.00 0.00 0.00 00:18:05.136 00:18:05.136 00:18:05.136 Latency(us) 00:18:05.136 [2024-11-02T22:17:10.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.136 [2024-11-02T22:17:10.893Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.136 Nvme0n1 : 10.00 37102.15 144.93 0.00 0.00 3447.21 2280.65 14050.92 00:18:05.136 [2024-11-02T22:17:10.893Z] =================================================================================================================== 00:18:05.136 [2024-11-02T22:17:10.893Z] Total : 37102.15 144.93 0.00 0.00 3447.21 2280.65 14050.92 00:18:05.136 0 00:18:05.136 23:17:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 615376 00:18:05.136 23:17:10 -- common/autotest_common.sh@926 -- # '[' -z 615376 ']' 00:18:05.136 23:17:10 -- common/autotest_common.sh@930 -- # kill -0 615376 00:18:05.136 23:17:10 -- common/autotest_common.sh@931 -- # uname 00:18:05.136 23:17:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:05.136 23:17:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 615376 00:18:05.136 23:17:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:05.136 23:17:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:05.136 23:17:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 615376' 00:18:05.136 killing process with pid 615376 00:18:05.136 23:17:10 -- common/autotest_common.sh@945 -- # kill 615376 00:18:05.136 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.136 
00:18:05.136 Latency(us) 00:18:05.136 [2024-11-02T22:17:10.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.136 [2024-11-02T22:17:10.893Z] =================================================================================================================== 00:18:05.136 [2024-11-02T22:17:10.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.136 23:17:10 -- common/autotest_common.sh@950 -- # wait 615376 00:18:05.395 23:17:10 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:05.395 23:17:11 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:18:05.395 23:17:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:05.653 23:17:11 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:05.653 23:17:11 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:05.653 23:17:11 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:05.912 [2024-11-02 23:17:11.472341] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:05.912 23:17:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:18:05.912 23:17:11 -- common/autotest_common.sh@640 -- # local es=0 00:18:05.912 23:17:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:18:05.912 23:17:11 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:05.912 23:17:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:05.912 23:17:11 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:05.912 23:17:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:05.912 23:17:11 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:05.912 23:17:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:05.912 23:17:11 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:05.912 23:17:11 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:05.912 23:17:11 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:18:06.172 request: 00:18:06.172 { 00:18:06.172 "uuid": "7060f08f-a75e-464d-a88b-f2bb49e0b7ba", 00:18:06.172 "method": "bdev_lvol_get_lvstores", 00:18:06.172 "req_id": 1 00:18:06.172 } 00:18:06.172 Got JSON-RPC error response 00:18:06.172 response: 00:18:06.172 { 00:18:06.172 "code": -19, 00:18:06.172 "message": "No such device" 00:18:06.172 } 00:18:06.172 23:17:11 -- common/autotest_common.sh@643 -- # es=1 00:18:06.172 23:17:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:06.172 23:17:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:06.172 23:17:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:06.172 23:17:11 -- target/nvmf_lvs_grow.sh@85 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:06.172 aio_bdev 00:18:06.172 23:17:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev fc1ed04f-f91d-4080-93b1-982363dba4bf 00:18:06.172 23:17:11 -- common/autotest_common.sh@887 -- # local bdev_name=fc1ed04f-f91d-4080-93b1-982363dba4bf 00:18:06.172 23:17:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:06.172 23:17:11 -- common/autotest_common.sh@889 -- # local i 00:18:06.172 23:17:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:06.172 23:17:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:06.172 23:17:11 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:06.454 23:17:12 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fc1ed04f-f91d-4080-93b1-982363dba4bf -t 2000 00:18:06.713 [ 00:18:06.714 { 00:18:06.714 "name": "fc1ed04f-f91d-4080-93b1-982363dba4bf", 00:18:06.714 "aliases": [ 00:18:06.714 "lvs/lvol" 00:18:06.714 ], 00:18:06.714 "product_name": "Logical Volume", 00:18:06.714 "block_size": 4096, 00:18:06.714 "num_blocks": 38912, 00:18:06.714 "uuid": "fc1ed04f-f91d-4080-93b1-982363dba4bf", 00:18:06.714 "assigned_rate_limits": { 00:18:06.714 "rw_ios_per_sec": 0, 00:18:06.714 "rw_mbytes_per_sec": 0, 00:18:06.714 "r_mbytes_per_sec": 0, 00:18:06.714 "w_mbytes_per_sec": 0 00:18:06.714 }, 00:18:06.714 "claimed": false, 00:18:06.714 "zoned": false, 00:18:06.714 "supported_io_types": { 00:18:06.714 "read": true, 00:18:06.714 "write": true, 00:18:06.714 "unmap": true, 00:18:06.714 "write_zeroes": true, 00:18:06.714 "flush": false, 00:18:06.714 "reset": true, 00:18:06.714 "compare": false, 00:18:06.714 "compare_and_write": false, 00:18:06.714 "abort": false, 00:18:06.714 "nvme_admin": false, 00:18:06.714 "nvme_io": false 00:18:06.714 }, 00:18:06.714 "driver_specific": { 00:18:06.714 "lvol": { 00:18:06.714 "lvol_store_uuid": "7060f08f-a75e-464d-a88b-f2bb49e0b7ba", 00:18:06.714 "base_bdev": "aio_bdev", 00:18:06.714 "thin_provision": false, 00:18:06.714 "snapshot": false, 00:18:06.714 "clone": false, 00:18:06.714 "esnap_clone": false 00:18:06.714 } 00:18:06.714 } 00:18:06.714 } 00:18:06.714 ] 00:18:06.714 23:17:12 -- common/autotest_common.sh@895 -- # return 0 00:18:06.714 23:17:12 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:18:06.714 23:17:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:06.714 23:17:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:06.714 23:17:12 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:18:06.714 23:17:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:06.973 23:17:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:06.973 23:17:12 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fc1ed04f-f91d-4080-93b1-982363dba4bf 00:18:07.232 23:17:12 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7060f08f-a75e-464d-a88b-f2bb49e0b7ba 00:18:07.232 23:17:12 -- target/nvmf_lvs_grow.sh@93 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:07.491 23:17:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:07.491 00:18:07.491 real 0m15.741s 00:18:07.491 user 0m15.760s 00:18:07.491 sys 0m1.168s 00:18:07.491 23:17:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.491 23:17:13 -- common/autotest_common.sh@10 -- # set +x 00:18:07.491 ************************************ 00:18:07.491 END TEST lvs_grow_clean 00:18:07.491 ************************************ 00:18:07.491 23:17:13 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:07.491 23:17:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:07.492 23:17:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:07.492 23:17:13 -- common/autotest_common.sh@10 -- # set +x 00:18:07.492 ************************************ 00:18:07.492 START TEST lvs_grow_dirty 00:18:07.492 ************************************ 00:18:07.492 23:17:13 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:07.492 23:17:13 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:07.751 23:17:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:07.751 23:17:13 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:08.010 23:17:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8c715927-be9c-4d50-8385-cab779dc1906 00:18:08.010 23:17:13 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:08.010 23:17:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:08.270 23:17:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:08.270 23:17:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:08.270 23:17:13 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8c715927-be9c-4d50-8385-cab779dc1906 lvol 150 00:18:08.270 23:17:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=10aa14a7-27f5-4248-a82c-819fde5093c4 00:18:08.270 23:17:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:08.270 23:17:13 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:08.529 [2024-11-02 
23:17:14.140789] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:08.529 [2024-11-02 23:17:14.140842] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:08.529 true 00:18:08.529 23:17:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:08.529 23:17:14 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:08.788 23:17:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:08.788 23:17:14 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:08.788 23:17:14 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 10aa14a7-27f5-4248-a82c-819fde5093c4 00:18:09.047 23:17:14 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:09.305 23:17:14 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:09.305 23:17:14 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=618146 00:18:09.305 23:17:14 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:09.305 23:17:14 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:09.305 23:17:14 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 618146 /var/tmp/bdevperf.sock 00:18:09.305 23:17:14 -- common/autotest_common.sh@819 -- # '[' -z 618146 ']' 00:18:09.305 23:17:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.305 23:17:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:09.305 23:17:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.305 23:17:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:09.305 23:17:14 -- common/autotest_common.sh@10 -- # set +x 00:18:09.305 [2024-11-02 23:17:15.034513] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
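The trace around this point exercises the usual export-and-attach pattern: the target publishes the lvol over NVMe-oF/RDMA, bdevperf is started with -z (wait for RPC) on its own socket, and the controller is attached before the workload begins. A minimal sketch of that pattern, using the NQN, address, UUID and rpc.py path from this run, and assuming the RDMA transport already exists on the target:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Target side: create the subsystem, expose the lvol as a namespace, listen on RDMA.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 10aa14a7-27f5-4248-a82c-819fde5093c4
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

# Initiator side: bdevperf was launched with "-z -r /var/tmp/bdevperf.sock", so it idles
# until this attach arrives; the remote namespace then appears as bdev Nvme0n1.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0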
00:18:09.306 [2024-11-02 23:17:15.034567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618146 ] 00:18:09.565 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.565 [2024-11-02 23:17:15.103741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.565 [2024-11-02 23:17:15.176867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.133 23:17:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:10.133 23:17:15 -- common/autotest_common.sh@852 -- # return 0 00:18:10.133 23:17:15 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:10.392 Nvme0n1 00:18:10.392 23:17:16 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:10.651 [ 00:18:10.651 { 00:18:10.651 "name": "Nvme0n1", 00:18:10.651 "aliases": [ 00:18:10.651 "10aa14a7-27f5-4248-a82c-819fde5093c4" 00:18:10.651 ], 00:18:10.651 "product_name": "NVMe disk", 00:18:10.651 "block_size": 4096, 00:18:10.651 "num_blocks": 38912, 00:18:10.651 "uuid": "10aa14a7-27f5-4248-a82c-819fde5093c4", 00:18:10.651 "assigned_rate_limits": { 00:18:10.651 "rw_ios_per_sec": 0, 00:18:10.651 "rw_mbytes_per_sec": 0, 00:18:10.651 "r_mbytes_per_sec": 0, 00:18:10.651 "w_mbytes_per_sec": 0 00:18:10.651 }, 00:18:10.651 "claimed": false, 00:18:10.651 "zoned": false, 00:18:10.651 "supported_io_types": { 00:18:10.651 "read": true, 00:18:10.651 "write": true, 00:18:10.651 "unmap": true, 00:18:10.651 "write_zeroes": true, 00:18:10.651 "flush": true, 00:18:10.651 "reset": true, 00:18:10.651 "compare": true, 00:18:10.651 "compare_and_write": true, 00:18:10.651 "abort": true, 00:18:10.651 "nvme_admin": true, 00:18:10.651 "nvme_io": true 00:18:10.651 }, 00:18:10.651 "memory_domains": [ 00:18:10.651 { 00:18:10.651 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:10.651 "dma_device_type": 0 00:18:10.651 } 00:18:10.651 ], 00:18:10.651 "driver_specific": { 00:18:10.651 "nvme": [ 00:18:10.651 { 00:18:10.651 "trid": { 00:18:10.651 "trtype": "RDMA", 00:18:10.651 "adrfam": "IPv4", 00:18:10.651 "traddr": "192.168.100.8", 00:18:10.652 "trsvcid": "4420", 00:18:10.652 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:10.652 }, 00:18:10.652 "ctrlr_data": { 00:18:10.652 "cntlid": 1, 00:18:10.652 "vendor_id": "0x8086", 00:18:10.652 "model_number": "SPDK bdev Controller", 00:18:10.652 "serial_number": "SPDK0", 00:18:10.652 "firmware_revision": "24.01.1", 00:18:10.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:10.652 "oacs": { 00:18:10.652 "security": 0, 00:18:10.652 "format": 0, 00:18:10.652 "firmware": 0, 00:18:10.652 "ns_manage": 0 00:18:10.652 }, 00:18:10.652 "multi_ctrlr": true, 00:18:10.652 "ana_reporting": false 00:18:10.652 }, 00:18:10.652 "vs": { 00:18:10.652 "nvme_version": "1.3" 00:18:10.652 }, 00:18:10.652 "ns_data": { 00:18:10.652 "id": 1, 00:18:10.652 "can_share": true 00:18:10.652 } 00:18:10.652 } 00:18:10.652 ], 00:18:10.652 "mp_policy": "active_passive" 00:18:10.652 } 00:18:10.652 } 00:18:10.652 ] 00:18:10.652 23:17:16 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=618412 00:18:10.652 23:17:16 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:10.652 23:17:16 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:10.652 Running I/O for 10 seconds... 00:18:12.031 Latency(us) 00:18:12.031 [2024-11-02T22:17:17.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.031 [2024-11-02T22:17:17.788Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.031 Nvme0n1 : 1.00 36316.00 141.86 0.00 0.00 0.00 0.00 0.00 00:18:12.031 [2024-11-02T22:17:17.788Z] =================================================================================================================== 00:18:12.031 [2024-11-02T22:17:17.788Z] Total : 36316.00 141.86 0.00 0.00 0.00 0.00 0.00 00:18:12.031 00:18:12.599 23:17:18 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:12.858 [2024-11-02T22:17:18.615Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.858 Nvme0n1 : 2.00 36687.50 143.31 0.00 0.00 0.00 0.00 0.00 00:18:12.858 [2024-11-02T22:17:18.615Z] =================================================================================================================== 00:18:12.858 [2024-11-02T22:17:18.615Z] Total : 36687.50 143.31 0.00 0.00 0.00 0.00 0.00 00:18:12.858 00:18:12.858 true 00:18:12.858 23:17:18 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:12.858 23:17:18 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:13.118 23:17:18 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:13.118 23:17:18 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:13.118 23:17:18 -- target/nvmf_lvs_grow.sh@65 -- # wait 618412 00:18:13.686 [2024-11-02T22:17:19.443Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.686 Nvme0n1 : 3.00 36789.33 143.71 0.00 0.00 0.00 0.00 0.00 00:18:13.686 [2024-11-02T22:17:19.443Z] =================================================================================================================== 00:18:13.686 [2024-11-02T22:17:19.443Z] Total : 36789.33 143.71 0.00 0.00 0.00 0.00 0.00 00:18:13.686 00:18:14.623 [2024-11-02T22:17:20.381Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.624 Nvme0n1 : 4.00 36832.50 143.88 0.00 0.00 0.00 0.00 0.00 00:18:14.624 [2024-11-02T22:17:20.381Z] =================================================================================================================== 00:18:14.624 [2024-11-02T22:17:20.381Z] Total : 36832.50 143.88 0.00 0.00 0.00 0.00 0.00 00:18:14.624 00:18:16.001 [2024-11-02T22:17:21.758Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.001 Nvme0n1 : 5.00 36869.40 144.02 0.00 0.00 0.00 0.00 0.00 00:18:16.001 [2024-11-02T22:17:21.758Z] =================================================================================================================== 00:18:16.001 [2024-11-02T22:17:21.758Z] Total : 36869.40 144.02 0.00 0.00 0.00 0.00 0.00 00:18:16.001 00:18:16.938 [2024-11-02T22:17:22.695Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.938 Nvme0n1 : 6.00 36959.17 144.37 0.00 0.00 0.00 0.00 0.00 00:18:16.938 [2024-11-02T22:17:22.695Z] 
=================================================================================================================== 00:18:16.938 [2024-11-02T22:17:22.695Z] Total : 36959.17 144.37 0.00 0.00 0.00 0.00 0.00 00:18:16.938 00:18:17.875 [2024-11-02T22:17:23.632Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.875 Nvme0n1 : 7.00 37029.14 144.65 0.00 0.00 0.00 0.00 0.00 00:18:17.875 [2024-11-02T22:17:23.633Z] =================================================================================================================== 00:18:17.876 [2024-11-02T22:17:23.633Z] Total : 37029.14 144.65 0.00 0.00 0.00 0.00 0.00 00:18:17.876 00:18:18.812 [2024-11-02T22:17:24.569Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:18.812 Nvme0n1 : 8.00 37080.38 144.85 0.00 0.00 0.00 0.00 0.00 00:18:18.812 [2024-11-02T22:17:24.569Z] =================================================================================================================== 00:18:18.812 [2024-11-02T22:17:24.569Z] Total : 37080.38 144.85 0.00 0.00 0.00 0.00 0.00 00:18:18.812 00:18:19.751 [2024-11-02T22:17:25.508Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.751 Nvme0n1 : 9.00 37123.00 145.01 0.00 0.00 0.00 0.00 0.00 00:18:19.751 [2024-11-02T22:17:25.508Z] =================================================================================================================== 00:18:19.751 [2024-11-02T22:17:25.508Z] Total : 37123.00 145.01 0.00 0.00 0.00 0.00 0.00 00:18:19.751 00:18:20.688 [2024-11-02T22:17:26.445Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.688 Nvme0n1 : 10.00 37157.90 145.15 0.00 0.00 0.00 0.00 0.00 00:18:20.688 [2024-11-02T22:17:26.445Z] =================================================================================================================== 00:18:20.688 [2024-11-02T22:17:26.445Z] Total : 37157.90 145.15 0.00 0.00 0.00 0.00 0.00 00:18:20.688 00:18:20.688 00:18:20.688 Latency(us) 00:18:20.688 [2024-11-02T22:17:26.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.688 [2024-11-02T22:17:26.445Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.689 Nvme0n1 : 10.00 37157.14 145.15 0.00 0.00 3442.18 2070.94 9751.76 00:18:20.689 [2024-11-02T22:17:26.446Z] =================================================================================================================== 00:18:20.689 [2024-11-02T22:17:26.446Z] Total : 37157.14 145.15 0.00 0.00 3442.18 2070.94 9751.76 00:18:20.689 0 00:18:20.689 23:17:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 618146 00:18:20.689 23:17:26 -- common/autotest_common.sh@926 -- # '[' -z 618146 ']' 00:18:20.689 23:17:26 -- common/autotest_common.sh@930 -- # kill -0 618146 00:18:20.689 23:17:26 -- common/autotest_common.sh@931 -- # uname 00:18:20.689 23:17:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:20.689 23:17:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 618146 00:18:20.948 23:17:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:20.948 23:17:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:20.948 23:17:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 618146' 00:18:20.948 killing process with pid 618146 00:18:20.948 23:17:26 -- common/autotest_common.sh@945 -- # kill 618146 00:18:20.948 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.948 
00:18:20.948 Latency(us) 00:18:20.948 [2024-11-02T22:17:26.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.948 [2024-11-02T22:17:26.705Z] =================================================================================================================== 00:18:20.948 [2024-11-02T22:17:26.705Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.948 23:17:26 -- common/autotest_common.sh@950 -- # wait 618146 00:18:20.948 23:17:26 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:21.207 23:17:26 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:21.207 23:17:26 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:21.467 23:17:27 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:21.467 23:17:27 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:21.467 23:17:27 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 614793 00:18:21.467 23:17:27 -- target/nvmf_lvs_grow.sh@74 -- # wait 614793 00:18:21.467 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 614793 Killed "${NVMF_APP[@]}" "$@" 00:18:21.467 23:17:27 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:21.467 23:17:27 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:21.467 23:17:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:21.467 23:17:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:21.467 23:17:27 -- common/autotest_common.sh@10 -- # set +x 00:18:21.467 23:17:27 -- nvmf/common.sh@469 -- # nvmfpid=620306 00:18:21.467 23:17:27 -- nvmf/common.sh@470 -- # waitforlisten 620306 00:18:21.467 23:17:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:21.467 23:17:27 -- common/autotest_common.sh@819 -- # '[' -z 620306 ']' 00:18:21.467 23:17:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.467 23:17:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:21.467 23:17:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.467 23:17:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:21.467 23:17:27 -- common/autotest_common.sh@10 -- # set +x 00:18:21.467 [2024-11-02 23:17:27.199022] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:21.467 [2024-11-02 23:17:27.199076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.726 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.726 [2024-11-02 23:17:27.270787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.726 [2024-11-02 23:17:27.343143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:21.726 [2024-11-02 23:17:27.343248] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.726 [2024-11-02 23:17:27.343258] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:21.726 [2024-11-02 23:17:27.343267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.726 [2024-11-02 23:17:27.343290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.295 23:17:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:22.295 23:17:28 -- common/autotest_common.sh@852 -- # return 0 00:18:22.295 23:17:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:22.295 23:17:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:22.295 23:17:28 -- common/autotest_common.sh@10 -- # set +x 00:18:22.295 23:17:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.554 23:17:28 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:22.554 [2024-11-02 23:17:28.215556] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:22.554 [2024-11-02 23:17:28.215654] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:22.554 [2024-11-02 23:17:28.215681] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:22.554 23:17:28 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:22.554 23:17:28 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 10aa14a7-27f5-4248-a82c-819fde5093c4 00:18:22.554 23:17:28 -- common/autotest_common.sh@887 -- # local bdev_name=10aa14a7-27f5-4248-a82c-819fde5093c4 00:18:22.554 23:17:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:22.554 23:17:28 -- common/autotest_common.sh@889 -- # local i 00:18:22.554 23:17:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:22.554 23:17:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:22.554 23:17:28 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:22.814 23:17:28 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 10aa14a7-27f5-4248-a82c-819fde5093c4 -t 2000 00:18:22.814 [ 00:18:22.814 { 00:18:22.814 "name": "10aa14a7-27f5-4248-a82c-819fde5093c4", 00:18:22.814 "aliases": [ 00:18:22.814 "lvs/lvol" 00:18:22.814 ], 00:18:22.814 "product_name": "Logical Volume", 00:18:22.814 "block_size": 4096, 00:18:22.814 "num_blocks": 38912, 00:18:22.814 "uuid": "10aa14a7-27f5-4248-a82c-819fde5093c4", 00:18:22.814 "assigned_rate_limits": { 00:18:22.814 "rw_ios_per_sec": 0, 00:18:22.814 "rw_mbytes_per_sec": 0, 00:18:22.814 "r_mbytes_per_sec": 0, 00:18:22.814 "w_mbytes_per_sec": 0 00:18:22.814 }, 00:18:22.814 "claimed": false, 00:18:22.814 "zoned": false, 00:18:22.814 "supported_io_types": { 00:18:22.814 "read": true, 00:18:22.814 "write": true, 00:18:22.814 "unmap": true, 00:18:22.814 "write_zeroes": true, 00:18:22.814 "flush": false, 00:18:22.814 "reset": true, 00:18:22.814 "compare": false, 00:18:22.814 "compare_and_write": false, 00:18:22.814 "abort": false, 00:18:22.814 "nvme_admin": false, 00:18:22.814 "nvme_io": false 00:18:22.814 }, 00:18:22.814 "driver_specific": { 00:18:22.814 "lvol": { 00:18:22.814 "lvol_store_uuid": "8c715927-be9c-4d50-8385-cab779dc1906", 00:18:22.814 "base_bdev": "aio_bdev", 00:18:22.814 "thin_provision": false, 00:18:22.814 "snapshot": false, 00:18:22.814 "clone": false, 00:18:22.814 "esnap_clone": false 00:18:22.814 } 00:18:22.814 } 00:18:22.814 } 00:18:22.814 ] 
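Once the recovered lvol is back, the checks that follow read the lvstore counters over RPC and compare them with jq. A compact sketch of that readback, using the lvstore UUID and the values this run expects (99 data clusters after the lvstore was grown to the 400M aio file, 61 of them still free with the 150M lvol in place):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
lvs_uuid=8c715927-be9c-4d50-8385-cab779dc1906

# bdev_lvol_get_lvstores returns a JSON array; jq pulls out the two counters
# that the dirty-grow test asserts on below.
free_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
data_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')

(( free_clusters == 61 )) && (( data_clusters == 99 ))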
00:18:23.074 23:17:28 -- common/autotest_common.sh@895 -- # return 0 00:18:23.074 23:17:28 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:23.074 23:17:28 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:23.074 23:17:28 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:23.074 23:17:28 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:23.074 23:17:28 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:23.333 23:17:28 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:23.333 23:17:28 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:23.592 [2024-11-02 23:17:29.095951] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:23.592 23:17:29 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:23.592 23:17:29 -- common/autotest_common.sh@640 -- # local es=0 00:18:23.592 23:17:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:23.592 23:17:29 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:23.592 23:17:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:23.592 23:17:29 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:23.592 23:17:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:23.592 23:17:29 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:23.592 23:17:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:23.592 23:17:29 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:23.592 23:17:29 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:23.592 23:17:29 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:23.592 request: 00:18:23.592 { 00:18:23.592 "uuid": "8c715927-be9c-4d50-8385-cab779dc1906", 00:18:23.592 "method": "bdev_lvol_get_lvstores", 00:18:23.592 "req_id": 1 00:18:23.592 } 00:18:23.593 Got JSON-RPC error response 00:18:23.593 response: 00:18:23.593 { 00:18:23.593 "code": -19, 00:18:23.593 "message": "No such device" 00:18:23.593 } 00:18:23.593 23:17:29 -- common/autotest_common.sh@643 -- # es=1 00:18:23.593 23:17:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:23.593 23:17:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:23.593 23:17:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:23.593 23:17:29 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:23.852 aio_bdev 00:18:23.852 23:17:29 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
10aa14a7-27f5-4248-a82c-819fde5093c4 00:18:23.852 23:17:29 -- common/autotest_common.sh@887 -- # local bdev_name=10aa14a7-27f5-4248-a82c-819fde5093c4 00:18:23.852 23:17:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:23.852 23:17:29 -- common/autotest_common.sh@889 -- # local i 00:18:23.852 23:17:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:23.852 23:17:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:23.852 23:17:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:24.110 23:17:29 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 10aa14a7-27f5-4248-a82c-819fde5093c4 -t 2000 00:18:24.110 [ 00:18:24.110 { 00:18:24.110 "name": "10aa14a7-27f5-4248-a82c-819fde5093c4", 00:18:24.110 "aliases": [ 00:18:24.110 "lvs/lvol" 00:18:24.110 ], 00:18:24.110 "product_name": "Logical Volume", 00:18:24.110 "block_size": 4096, 00:18:24.110 "num_blocks": 38912, 00:18:24.110 "uuid": "10aa14a7-27f5-4248-a82c-819fde5093c4", 00:18:24.110 "assigned_rate_limits": { 00:18:24.110 "rw_ios_per_sec": 0, 00:18:24.110 "rw_mbytes_per_sec": 0, 00:18:24.110 "r_mbytes_per_sec": 0, 00:18:24.110 "w_mbytes_per_sec": 0 00:18:24.110 }, 00:18:24.110 "claimed": false, 00:18:24.110 "zoned": false, 00:18:24.110 "supported_io_types": { 00:18:24.110 "read": true, 00:18:24.110 "write": true, 00:18:24.110 "unmap": true, 00:18:24.110 "write_zeroes": true, 00:18:24.110 "flush": false, 00:18:24.110 "reset": true, 00:18:24.110 "compare": false, 00:18:24.110 "compare_and_write": false, 00:18:24.110 "abort": false, 00:18:24.110 "nvme_admin": false, 00:18:24.110 "nvme_io": false 00:18:24.110 }, 00:18:24.110 "driver_specific": { 00:18:24.110 "lvol": { 00:18:24.110 "lvol_store_uuid": "8c715927-be9c-4d50-8385-cab779dc1906", 00:18:24.110 "base_bdev": "aio_bdev", 00:18:24.110 "thin_provision": false, 00:18:24.110 "snapshot": false, 00:18:24.110 "clone": false, 00:18:24.110 "esnap_clone": false 00:18:24.110 } 00:18:24.110 } 00:18:24.110 } 00:18:24.110 ] 00:18:24.110 23:17:29 -- common/autotest_common.sh@895 -- # return 0 00:18:24.110 23:17:29 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:24.110 23:17:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:24.369 23:17:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:24.369 23:17:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:24.369 23:17:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:24.628 23:17:30 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:24.628 23:17:30 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 10aa14a7-27f5-4248-a82c-819fde5093c4 00:18:24.628 23:17:30 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c715927-be9c-4d50-8385-cab779dc1906 00:18:24.888 23:17:30 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:25.147 23:17:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:25.147 00:18:25.147 real 
0m17.536s 00:18:25.147 user 0m45.369s 00:18:25.147 sys 0m3.326s 00:18:25.147 23:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.147 23:17:30 -- common/autotest_common.sh@10 -- # set +x 00:18:25.147 ************************************ 00:18:25.147 END TEST lvs_grow_dirty 00:18:25.147 ************************************ 00:18:25.147 23:17:30 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:25.147 23:17:30 -- common/autotest_common.sh@796 -- # type=--id 00:18:25.147 23:17:30 -- common/autotest_common.sh@797 -- # id=0 00:18:25.147 23:17:30 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:25.147 23:17:30 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:25.147 23:17:30 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:25.147 23:17:30 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:25.147 23:17:30 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:25.147 23:17:30 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:25.147 nvmf_trace.0 00:18:25.147 23:17:30 -- common/autotest_common.sh@811 -- # return 0 00:18:25.147 23:17:30 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:25.147 23:17:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:25.147 23:17:30 -- nvmf/common.sh@116 -- # sync 00:18:25.147 23:17:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:25.147 23:17:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:25.147 23:17:30 -- nvmf/common.sh@119 -- # set +e 00:18:25.147 23:17:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:25.147 23:17:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:25.147 rmmod nvme_rdma 00:18:25.147 rmmod nvme_fabrics 00:18:25.147 23:17:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:25.147 23:17:30 -- nvmf/common.sh@123 -- # set -e 00:18:25.147 23:17:30 -- nvmf/common.sh@124 -- # return 0 00:18:25.147 23:17:30 -- nvmf/common.sh@477 -- # '[' -n 620306 ']' 00:18:25.147 23:17:30 -- nvmf/common.sh@478 -- # killprocess 620306 00:18:25.147 23:17:30 -- common/autotest_common.sh@926 -- # '[' -z 620306 ']' 00:18:25.147 23:17:30 -- common/autotest_common.sh@930 -- # kill -0 620306 00:18:25.147 23:17:30 -- common/autotest_common.sh@931 -- # uname 00:18:25.406 23:17:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:25.406 23:17:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 620306 00:18:25.406 23:17:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:25.406 23:17:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:25.406 23:17:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 620306' 00:18:25.406 killing process with pid 620306 00:18:25.406 23:17:30 -- common/autotest_common.sh@945 -- # kill 620306 00:18:25.406 23:17:30 -- common/autotest_common.sh@950 -- # wait 620306 00:18:25.406 23:17:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:25.406 23:17:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:25.406 00:18:25.406 real 0m41.871s 00:18:25.406 user 1m7.434s 00:18:25.406 sys 0m10.152s 00:18:25.406 23:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.406 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:18:25.406 ************************************ 00:18:25.406 END TEST nvmf_lvs_grow 00:18:25.406 ************************************ 00:18:25.666 23:17:31 -- 
nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:25.666 23:17:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:25.666 23:17:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.666 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:18:25.666 ************************************ 00:18:25.666 START TEST nvmf_bdev_io_wait 00:18:25.666 ************************************ 00:18:25.666 23:17:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:25.666 * Looking for test storage... 00:18:25.666 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:25.666 23:17:31 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.666 23:17:31 -- nvmf/common.sh@7 -- # uname -s 00:18:25.666 23:17:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.666 23:17:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.666 23:17:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.666 23:17:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.666 23:17:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.666 23:17:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.666 23:17:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.666 23:17:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.666 23:17:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.666 23:17:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.666 23:17:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:25.666 23:17:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:25.666 23:17:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.666 23:17:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.666 23:17:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.666 23:17:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:25.666 23:17:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.666 23:17:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.666 23:17:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.666 23:17:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.666 23:17:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.667 23:17:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.667 23:17:31 -- paths/export.sh@5 -- # export PATH 00:18:25.667 23:17:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.667 23:17:31 -- nvmf/common.sh@46 -- # : 0 00:18:25.667 23:17:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:25.667 23:17:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:25.667 23:17:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:25.667 23:17:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.667 23:17:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.667 23:17:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:25.667 23:17:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:25.667 23:17:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:25.667 23:17:31 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.667 23:17:31 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.667 23:17:31 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:25.667 23:17:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:25.667 23:17:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.667 23:17:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:25.667 23:17:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:25.667 23:17:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:25.667 23:17:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.667 23:17:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.667 23:17:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.667 23:17:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:25.667 23:17:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:25.667 23:17:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:25.667 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.241 23:17:37 -- nvmf/common.sh@288 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:18:32.241 23:17:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:32.241 23:17:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:32.241 23:17:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:32.241 23:17:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:32.241 23:17:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:32.241 23:17:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:32.241 23:17:37 -- nvmf/common.sh@294 -- # net_devs=() 00:18:32.241 23:17:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:32.241 23:17:37 -- nvmf/common.sh@295 -- # e810=() 00:18:32.241 23:17:37 -- nvmf/common.sh@295 -- # local -ga e810 00:18:32.241 23:17:37 -- nvmf/common.sh@296 -- # x722=() 00:18:32.241 23:17:37 -- nvmf/common.sh@296 -- # local -ga x722 00:18:32.241 23:17:37 -- nvmf/common.sh@297 -- # mlx=() 00:18:32.241 23:17:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:32.241 23:17:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.241 23:17:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:32.241 23:17:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:32.241 23:17:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:32.241 23:17:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:32.241 23:17:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:32.241 23:17:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:32.241 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:32.241 23:17:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:32.241 23:17:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:32.241 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:32.241 23:17:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:18:32.241 23:17:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:32.241 23:17:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:32.241 23:17:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.241 23:17:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.241 23:17:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.241 23:17:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:32.241 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:32.241 23:17:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.241 23:17:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.241 23:17:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.241 23:17:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.241 23:17:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:32.241 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:32.241 23:17:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.241 23:17:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:32.241 23:17:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:32.241 23:17:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:32.241 23:17:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:32.241 23:17:37 -- nvmf/common.sh@57 -- # uname 00:18:32.241 23:17:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:32.241 23:17:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:32.241 23:17:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:32.241 23:17:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:32.241 23:17:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:32.241 23:17:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:32.241 23:17:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:32.241 23:17:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:32.241 23:17:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:32.241 23:17:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:32.241 23:17:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:32.241 23:17:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:32.241 23:17:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:32.241 23:17:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:32.241 23:17:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:32.241 23:17:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:32.241 23:17:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:32.241 23:17:37 -- nvmf/common.sh@104 -- # continue 2 00:18:32.241 23:17:37 
-- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:32.241 23:17:37 -- nvmf/common.sh@104 -- # continue 2 00:18:32.241 23:17:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:32.241 23:17:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:32.241 23:17:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:32.241 23:17:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:32.241 23:17:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:32.241 23:17:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:32.241 23:17:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:32.241 23:17:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:32.241 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:32.241 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:32.241 altname enp217s0f0np0 00:18:32.241 altname ens818f0np0 00:18:32.241 inet 192.168.100.8/24 scope global mlx_0_0 00:18:32.241 valid_lft forever preferred_lft forever 00:18:32.241 23:17:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:32.241 23:17:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:32.241 23:17:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:32.241 23:17:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:32.241 23:17:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:32.241 23:17:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:32.241 23:17:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:32.241 23:17:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:32.241 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:32.241 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:32.241 altname enp217s0f1np1 00:18:32.241 altname ens818f1np1 00:18:32.241 inet 192.168.100.9/24 scope global mlx_0_1 00:18:32.241 valid_lft forever preferred_lft forever 00:18:32.241 23:17:37 -- nvmf/common.sh@410 -- # return 0 00:18:32.241 23:17:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.241 23:17:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:32.241 23:17:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:32.241 23:17:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:32.241 23:17:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:32.241 23:17:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:32.241 23:17:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:32.241 23:17:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:32.241 23:17:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:32.241 23:17:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.241 23:17:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:32.241 23:17:37 -- nvmf/common.sh@103 -- # echo 
mlx_0_0 00:18:32.241 23:17:37 -- nvmf/common.sh@104 -- # continue 2 00:18:32.242 23:17:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:32.242 23:17:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.242 23:17:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:32.242 23:17:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.242 23:17:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:32.242 23:17:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:32.242 23:17:37 -- nvmf/common.sh@104 -- # continue 2 00:18:32.242 23:17:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:32.242 23:17:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:32.242 23:17:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:32.242 23:17:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:32.242 23:17:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:32.242 23:17:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:32.242 23:17:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:32.242 23:17:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:32.242 23:17:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:32.242 23:17:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:32.242 23:17:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:32.242 23:17:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:32.242 23:17:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:32.242 192.168.100.9' 00:18:32.242 23:17:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:32.242 192.168.100.9' 00:18:32.242 23:17:37 -- nvmf/common.sh@445 -- # head -n 1 00:18:32.242 23:17:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:32.242 23:17:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:32.242 192.168.100.9' 00:18:32.242 23:17:37 -- nvmf/common.sh@446 -- # tail -n +2 00:18:32.242 23:17:37 -- nvmf/common.sh@446 -- # head -n 1 00:18:32.242 23:17:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:32.242 23:17:37 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:32.242 23:17:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:32.242 23:17:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:32.242 23:17:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:32.242 23:17:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:32.242 23:17:37 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:32.242 23:17:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:32.242 23:17:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:32.242 23:17:37 -- common/autotest_common.sh@10 -- # set +x 00:18:32.242 23:17:37 -- nvmf/common.sh@469 -- # nvmfpid=624257 00:18:32.242 23:17:37 -- nvmf/common.sh@470 -- # waitforlisten 624257 00:18:32.242 23:17:37 -- common/autotest_common.sh@819 -- # '[' -z 624257 ']' 00:18:32.242 23:17:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.242 23:17:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.242 23:17:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
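The 192.168.100.8/192.168.100.9 addresses used throughout this job are read straight off the two Mellanox netdevs by get_ip_address. A minimal standalone sketch of that extraction, assuming the mlx_0_0/mlx_0_1 interface names seen in this run:

# Sketch of the get_ip_address logic traced above (interface names are
# specific to this machine, not fixed by the test scripts).
for ifc in mlx_0_0 mlx_0_1; do
    # "ip -o -4" prints one line per IPv4 address; field 4 is "ADDR/PREFIX"
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# Prints 192.168.100.8 and 192.168.100.9 here, which become
# NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP for the tests below.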
00:18:32.242 23:17:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.242 23:17:37 -- common/autotest_common.sh@10 -- # set +x 00:18:32.242 23:17:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:32.242 [2024-11-02 23:17:37.970992] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:32.242 [2024-11-02 23:17:37.971042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.502 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.502 [2024-11-02 23:17:38.040531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.502 [2024-11-02 23:17:38.115553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:32.502 [2024-11-02 23:17:38.115660] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.502 [2024-11-02 23:17:38.115671] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.502 [2024-11-02 23:17:38.115680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.502 [2024-11-02 23:17:38.115725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.502 [2024-11-02 23:17:38.115744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.502 [2024-11-02 23:17:38.115849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.502 [2024-11-02 23:17:38.115851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.070 23:17:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.070 23:17:38 -- common/autotest_common.sh@852 -- # return 0 00:18:33.070 23:17:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:33.070 23:17:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:33.070 23:17:38 -- common/autotest_common.sh@10 -- # set +x 00:18:33.329 23:17:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.329 23:17:38 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:33.329 23:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.329 23:17:38 -- common/autotest_common.sh@10 -- # set +x 00:18:33.329 23:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.329 23:17:38 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:33.329 23:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.329 23:17:38 -- common/autotest_common.sh@10 -- # set +x 00:18:33.329 23:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.329 23:17:38 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:33.329 23:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.329 23:17:38 -- common/autotest_common.sh@10 -- # set +x 00:18:33.329 [2024-11-02 23:17:38.930498] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8660c0/0x86a5b0) succeed. 00:18:33.329 [2024-11-02 23:17:38.939347] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8676b0/0x8abc50) succeed. 
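The rpc_cmd calls traced here are effectively invocations of SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket, so the target bring-up for this test can be read as the following sequence (a sketch using the same arguments as this run; paths relative to the SPDK tree):

# Target was started with --wait-for-rpc, so bdev options can still be tuned
# before the framework initializes.
./build/bin/nvmf_tgt -m 0xF --wait-for-rpc &
./scripts/rpc.py bdev_set_options -p 5 -c 1
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192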
00:18:33.329 23:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.329 23:17:39 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:33.329 23:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.329 23:17:39 -- common/autotest_common.sh@10 -- # set +x 00:18:33.588 Malloc0 00:18:33.588 23:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:33.588 23:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.588 23:17:39 -- common/autotest_common.sh@10 -- # set +x 00:18:33.588 23:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.588 23:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.588 23:17:39 -- common/autotest_common.sh@10 -- # set +x 00:18:33.588 23:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:33.588 23:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.588 23:17:39 -- common/autotest_common.sh@10 -- # set +x 00:18:33.588 [2024-11-02 23:17:39.124339] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:33.588 23:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=624397 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@30 -- # READ_PID=624399 00:18:33.588 23:17:39 -- nvmf/common.sh@520 -- # config=() 00:18:33.588 23:17:39 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.588 23:17:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.588 23:17:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.588 { 00:18:33.588 "params": { 00:18:33.588 "name": "Nvme$subsystem", 00:18:33.588 "trtype": "$TEST_TRANSPORT", 00:18:33.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.588 "adrfam": "ipv4", 00:18:33.588 "trsvcid": "$NVMF_PORT", 00:18:33.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.588 "hdgst": ${hdgst:-false}, 00:18:33.588 "ddgst": ${ddgst:-false} 00:18:33.588 }, 00:18:33.588 "method": "bdev_nvme_attach_controller" 00:18:33.588 } 00:18:33.588 EOF 00:18:33.588 )") 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=624401 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:33.588 23:17:39 -- nvmf/common.sh@520 -- # config=() 00:18:33.588 23:17:39 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.588 23:17:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.588 23:17:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.588 { 00:18:33.588 "params": { 00:18:33.588 "name": 
"Nvme$subsystem", 00:18:33.588 "trtype": "$TEST_TRANSPORT", 00:18:33.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.588 "adrfam": "ipv4", 00:18:33.588 "trsvcid": "$NVMF_PORT", 00:18:33.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.588 "hdgst": ${hdgst:-false}, 00:18:33.588 "ddgst": ${ddgst:-false} 00:18:33.588 }, 00:18:33.588 "method": "bdev_nvme_attach_controller" 00:18:33.588 } 00:18:33.588 EOF 00:18:33.588 )") 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=624404 00:18:33.588 23:17:39 -- nvmf/common.sh@542 -- # cat 00:18:33.588 23:17:39 -- target/bdev_io_wait.sh@35 -- # sync 00:18:33.588 23:17:39 -- nvmf/common.sh@520 -- # config=() 00:18:33.588 23:17:39 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.588 23:17:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.588 23:17:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.588 { 00:18:33.588 "params": { 00:18:33.588 "name": "Nvme$subsystem", 00:18:33.589 "trtype": "$TEST_TRANSPORT", 00:18:33.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.589 "adrfam": "ipv4", 00:18:33.589 "trsvcid": "$NVMF_PORT", 00:18:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.589 "hdgst": ${hdgst:-false}, 00:18:33.589 "ddgst": ${ddgst:-false} 00:18:33.589 }, 00:18:33.589 "method": "bdev_nvme_attach_controller" 00:18:33.589 } 00:18:33.589 EOF 00:18:33.589 )") 00:18:33.589 23:17:39 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:33.589 23:17:39 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:33.589 23:17:39 -- nvmf/common.sh@520 -- # config=() 00:18:33.589 23:17:39 -- nvmf/common.sh@542 -- # cat 00:18:33.589 23:17:39 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.589 23:17:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.589 23:17:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.589 { 00:18:33.589 "params": { 00:18:33.589 "name": "Nvme$subsystem", 00:18:33.589 "trtype": "$TEST_TRANSPORT", 00:18:33.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.589 "adrfam": "ipv4", 00:18:33.589 "trsvcid": "$NVMF_PORT", 00:18:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.589 "hdgst": ${hdgst:-false}, 00:18:33.589 "ddgst": ${ddgst:-false} 00:18:33.589 }, 00:18:33.589 "method": "bdev_nvme_attach_controller" 00:18:33.589 } 00:18:33.589 EOF 00:18:33.589 )") 00:18:33.589 23:17:39 -- nvmf/common.sh@542 -- # cat 00:18:33.589 23:17:39 -- target/bdev_io_wait.sh@37 -- # wait 624397 00:18:33.589 23:17:39 -- nvmf/common.sh@542 -- # cat 00:18:33.589 23:17:39 -- nvmf/common.sh@544 -- # jq . 00:18:33.589 23:17:39 -- nvmf/common.sh@544 -- # jq . 00:18:33.589 23:17:39 -- nvmf/common.sh@544 -- # jq . 
00:18:33.589 23:17:39 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.589 23:17:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.589 "params": { 00:18:33.589 "name": "Nvme1", 00:18:33.589 "trtype": "rdma", 00:18:33.589 "traddr": "192.168.100.8", 00:18:33.589 "adrfam": "ipv4", 00:18:33.589 "trsvcid": "4420", 00:18:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.589 "hdgst": false, 00:18:33.589 "ddgst": false 00:18:33.589 }, 00:18:33.589 "method": "bdev_nvme_attach_controller" 00:18:33.589 }' 00:18:33.589 23:17:39 -- nvmf/common.sh@544 -- # jq . 00:18:33.589 23:17:39 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.589 23:17:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.589 "params": { 00:18:33.589 "name": "Nvme1", 00:18:33.589 "trtype": "rdma", 00:18:33.589 "traddr": "192.168.100.8", 00:18:33.589 "adrfam": "ipv4", 00:18:33.589 "trsvcid": "4420", 00:18:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.589 "hdgst": false, 00:18:33.589 "ddgst": false 00:18:33.589 }, 00:18:33.589 "method": "bdev_nvme_attach_controller" 00:18:33.589 }' 00:18:33.589 23:17:39 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.589 23:17:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.589 "params": { 00:18:33.589 "name": "Nvme1", 00:18:33.589 "trtype": "rdma", 00:18:33.589 "traddr": "192.168.100.8", 00:18:33.589 "adrfam": "ipv4", 00:18:33.589 "trsvcid": "4420", 00:18:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.589 "hdgst": false, 00:18:33.589 "ddgst": false 00:18:33.589 }, 00:18:33.589 "method": "bdev_nvme_attach_controller" 00:18:33.589 }' 00:18:33.589 23:17:39 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.589 23:17:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.589 "params": { 00:18:33.589 "name": "Nvme1", 00:18:33.589 "trtype": "rdma", 00:18:33.589 "traddr": "192.168.100.8", 00:18:33.589 "adrfam": "ipv4", 00:18:33.589 "trsvcid": "4420", 00:18:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.589 "hdgst": false, 00:18:33.589 "ddgst": false 00:18:33.589 }, 00:18:33.589 "method": "bdev_nvme_attach_controller" 00:18:33.589 }' 00:18:33.589 [2024-11-02 23:17:39.174486] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:33.589 [2024-11-02 23:17:39.174538] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:33.589 [2024-11-02 23:17:39.175518] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:33.589 [2024-11-02 23:17:39.175564] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:33.589 [2024-11-02 23:17:39.177928] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:33.589 [2024-11-02 23:17:39.177929] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:18:33.589 [2024-11-02 23:17:39.177983] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:33.589 [2024-11-02 23:17:39.177984] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:33.589 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.589 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.848 [2024-11-02 23:17:39.366401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.848 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.848 [2024-11-02 23:17:39.438523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:33.848 [2024-11-02 23:17:39.459169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.848 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.848 [2024-11-02 23:17:39.521888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.848 [2024-11-02 23:17:39.539813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:33.848 [2024-11-02 23:17:39.588611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:34.107 [2024-11-02 23:17:39.627888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.107 [2024-11-02 23:17:39.710576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:34.107 Running I/O for 1 seconds... 00:18:34.107 Running I/O for 1 seconds... 00:18:34.107 Running I/O for 1 seconds... 00:18:34.107 Running I/O for 1 seconds...
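The four jobs above are the same bdevperf binary run concurrently, differing only in core mask, instance id and workload; the /dev/fd/63 argument is bash process substitution feeding each instance the gen_nvmf_target_json output printed earlier. A condensed sketch of the launch pattern used by this test:

BP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
# write, read, flush and unmap jobs, each on its own core, each reading the
# generated NVMe-oF attach config through process substitution
$BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
$BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID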
00:18:35.043 00:18:35.043 Latency(us) 00:18:35.043 [2024-11-02T22:17:40.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.043 [2024-11-02T22:17:40.800Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:35.043 Nvme1n1 : 1.00 20105.75 78.54 0.00 0.00 6349.83 3237.48 15518.92 00:18:35.043 [2024-11-02T22:17:40.800Z] =================================================================================================================== 00:18:35.043 [2024-11-02T22:17:40.800Z] Total : 20105.75 78.54 0.00 0.00 6349.83 3237.48 15518.92 00:18:35.043 00:18:35.043 Latency(us) 00:18:35.044 [2024-11-02T22:17:40.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.044 [2024-11-02T22:17:40.801Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:35.044 Nvme1n1 : 1.00 266199.85 1039.84 0.00 0.00 479.68 190.05 1835.01 00:18:35.044 [2024-11-02T22:17:40.801Z] =================================================================================================================== 00:18:35.044 [2024-11-02T22:17:40.801Z] Total : 266199.85 1039.84 0.00 0.00 479.68 190.05 1835.01 00:18:35.044 00:18:35.044 Latency(us) 00:18:35.044 [2024-11-02T22:17:40.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.044 [2024-11-02T22:17:40.801Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:35.044 Nvme1n1 : 1.01 15647.55 61.12 0.00 0.00 8154.92 5295.31 16567.50 00:18:35.044 [2024-11-02T22:17:40.801Z] =================================================================================================================== 00:18:35.044 [2024-11-02T22:17:40.801Z] Total : 15647.55 61.12 0.00 0.00 8154.92 5295.31 16567.50 00:18:35.302 00:18:35.302 Latency(us) 00:18:35.302 [2024-11-02T22:17:41.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.303 [2024-11-02T22:17:41.060Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:35.303 Nvme1n1 : 1.00 15682.20 61.26 0.00 0.00 8143.53 3801.09 18454.94 00:18:35.303 [2024-11-02T22:17:41.060Z] =================================================================================================================== 00:18:35.303 [2024-11-02T22:17:41.060Z] Total : 15682.20 61.26 0.00 0.00 8143.53 3801.09 18454.94 00:18:35.562 23:17:41 -- target/bdev_io_wait.sh@38 -- # wait 624399 00:18:35.562 23:17:41 -- target/bdev_io_wait.sh@39 -- # wait 624401 00:18:35.562 23:17:41 -- target/bdev_io_wait.sh@40 -- # wait 624404 00:18:35.562 23:17:41 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.562 23:17:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.562 23:17:41 -- common/autotest_common.sh@10 -- # set +x 00:18:35.562 23:17:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.562 23:17:41 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:35.562 23:17:41 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:35.562 23:17:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:35.562 23:17:41 -- nvmf/common.sh@116 -- # sync 00:18:35.562 23:17:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:35.562 23:17:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:35.562 23:17:41 -- nvmf/common.sh@119 -- # set +e 00:18:35.562 23:17:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:35.562 23:17:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:35.562 rmmod nvme_rdma 00:18:35.562 
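The MiB/s column in these tables is just IOPS scaled by the fixed 4096-byte IO size: for the unmap job, 20105.75 * 4096 / 1048576 ≈ 78.54 MiB/s, matching the reported value. A one-line check:

awk 'BEGIN { printf "%.2f MiB/s\n", 20105.75 * 4096 / 1048576 }'   # -> 78.54 MiB/s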
rmmod nvme_fabrics 00:18:35.562 23:17:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:35.562 23:17:41 -- nvmf/common.sh@123 -- # set -e 00:18:35.562 23:17:41 -- nvmf/common.sh@124 -- # return 0 00:18:35.562 23:17:41 -- nvmf/common.sh@477 -- # '[' -n 624257 ']' 00:18:35.562 23:17:41 -- nvmf/common.sh@478 -- # killprocess 624257 00:18:35.562 23:17:41 -- common/autotest_common.sh@926 -- # '[' -z 624257 ']' 00:18:35.562 23:17:41 -- common/autotest_common.sh@930 -- # kill -0 624257 00:18:35.562 23:17:41 -- common/autotest_common.sh@931 -- # uname 00:18:35.562 23:17:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:35.562 23:17:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 624257 00:18:35.821 23:17:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:35.821 23:17:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:35.821 23:17:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 624257' 00:18:35.821 killing process with pid 624257 00:18:35.821 23:17:41 -- common/autotest_common.sh@945 -- # kill 624257 00:18:35.821 23:17:41 -- common/autotest_common.sh@950 -- # wait 624257 00:18:36.081 23:17:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:36.081 23:17:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:36.081 00:18:36.081 real 0m10.383s 00:18:36.081 user 0m21.411s 00:18:36.081 sys 0m6.398s 00:18:36.081 23:17:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:36.081 23:17:41 -- common/autotest_common.sh@10 -- # set +x 00:18:36.081 ************************************ 00:18:36.081 END TEST nvmf_bdev_io_wait 00:18:36.081 ************************************ 00:18:36.081 23:17:41 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:36.081 23:17:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:36.081 23:17:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:36.081 23:17:41 -- common/autotest_common.sh@10 -- # set +x 00:18:36.081 ************************************ 00:18:36.081 START TEST nvmf_queue_depth 00:18:36.081 ************************************ 00:18:36.081 23:17:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:36.081 * Looking for test storage... 
00:18:36.081 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:36.081 23:17:41 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.081 23:17:41 -- nvmf/common.sh@7 -- # uname -s 00:18:36.081 23:17:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.081 23:17:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.081 23:17:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.081 23:17:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.081 23:17:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.081 23:17:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.081 23:17:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.081 23:17:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.081 23:17:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.081 23:17:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.081 23:17:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:36.081 23:17:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:36.081 23:17:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.081 23:17:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.081 23:17:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.081 23:17:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:36.081 23:17:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.081 23:17:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.081 23:17:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.081 23:17:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.081 23:17:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.081 23:17:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.081 23:17:41 -- paths/export.sh@5 -- # export PATH 00:18:36.081 23:17:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.081 23:17:41 -- nvmf/common.sh@46 -- # : 0 00:18:36.081 23:17:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:36.081 23:17:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:36.081 23:17:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:36.081 23:17:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.081 23:17:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.081 23:17:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:36.081 23:17:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:36.081 23:17:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:36.081 23:17:41 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:36.081 23:17:41 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:36.081 23:17:41 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.081 23:17:41 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:36.081 23:17:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:36.081 23:17:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.081 23:17:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:36.081 23:17:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:36.081 23:17:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:36.081 23:17:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.081 23:17:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.081 23:17:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.081 23:17:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:36.081 23:17:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:36.081 23:17:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:36.081 23:17:41 -- common/autotest_common.sh@10 -- # set +x 00:18:42.664 23:17:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:42.664 23:17:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:42.664 23:17:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:42.664 23:17:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:42.664 23:17:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:42.664 23:17:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:42.664 23:17:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:42.664 23:17:48 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:42.664 23:17:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:42.664 23:17:48 -- nvmf/common.sh@295 -- # e810=() 00:18:42.664 23:17:48 -- nvmf/common.sh@295 -- # local -ga e810 00:18:42.664 23:17:48 -- nvmf/common.sh@296 -- # x722=() 00:18:42.664 23:17:48 -- nvmf/common.sh@296 -- # local -ga x722 00:18:42.664 23:17:48 -- nvmf/common.sh@297 -- # mlx=() 00:18:42.664 23:17:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:42.664 23:17:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.664 23:17:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:42.664 23:17:48 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:42.664 23:17:48 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:42.664 23:17:48 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:42.664 23:17:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:42.664 23:17:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:42.664 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:42.664 23:17:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:42.664 23:17:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:42.664 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:42.664 23:17:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:42.664 23:17:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:42.664 23:17:48 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.664 23:17:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:42.664 23:17:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.664 23:17:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:42.664 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:42.664 23:17:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.664 23:17:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.664 23:17:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:42.664 23:17:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.664 23:17:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:42.664 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:42.664 23:17:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.664 23:17:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:42.664 23:17:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:42.664 23:17:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:42.664 23:17:48 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:42.664 23:17:48 -- nvmf/common.sh@57 -- # uname 00:18:42.664 23:17:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:42.664 23:17:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:42.664 23:17:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:42.664 23:17:48 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:42.664 23:17:48 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:42.664 23:17:48 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:42.664 23:17:48 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:42.664 23:17:48 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:42.664 23:17:48 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:42.664 23:17:48 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:42.664 23:17:48 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:42.664 23:17:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:42.664 23:17:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:42.664 23:17:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:42.664 23:17:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:42.664 23:17:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:42.664 23:17:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:42.664 23:17:48 -- nvmf/common.sh@104 -- # continue 2 00:18:42.664 23:17:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:42.664 23:17:48 -- 
nvmf/common.sh@104 -- # continue 2 00:18:42.664 23:17:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:42.664 23:17:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:42.664 23:17:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:42.664 23:17:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:42.664 23:17:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:42.664 23:17:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:42.664 23:17:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:42.664 23:17:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:42.664 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:42.664 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:42.664 altname enp217s0f0np0 00:18:42.664 altname ens818f0np0 00:18:42.664 inet 192.168.100.8/24 scope global mlx_0_0 00:18:42.664 valid_lft forever preferred_lft forever 00:18:42.664 23:17:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:42.664 23:17:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:42.664 23:17:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:42.664 23:17:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:42.664 23:17:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:42.664 23:17:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:42.664 23:17:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:42.664 23:17:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:42.664 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:42.664 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:42.664 altname enp217s0f1np1 00:18:42.664 altname ens818f1np1 00:18:42.664 inet 192.168.100.9/24 scope global mlx_0_1 00:18:42.664 valid_lft forever preferred_lft forever 00:18:42.664 23:17:48 -- nvmf/common.sh@410 -- # return 0 00:18:42.664 23:17:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:42.664 23:17:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:42.664 23:17:48 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:42.664 23:17:48 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:42.664 23:17:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:42.664 23:17:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:42.664 23:17:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:42.664 23:17:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:42.664 23:17:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:42.664 23:17:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.664 23:17:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:42.664 23:17:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:42.664 23:17:48 -- nvmf/common.sh@104 -- # continue 2 00:18:42.665 23:17:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:42.665 23:17:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.665 23:17:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:42.665 23:17:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.665 23:17:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:18:42.665 23:17:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:42.665 23:17:48 -- nvmf/common.sh@104 -- # continue 2 00:18:42.665 23:17:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:42.665 23:17:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:42.665 23:17:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:42.665 23:17:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:42.665 23:17:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:42.665 23:17:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:42.665 23:17:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:42.665 23:17:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:42.665 23:17:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:42.665 23:17:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:42.665 23:17:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:42.665 23:17:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:42.665 23:17:48 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:42.665 192.168.100.9' 00:18:42.665 23:17:48 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:42.665 192.168.100.9' 00:18:42.665 23:17:48 -- nvmf/common.sh@445 -- # head -n 1 00:18:42.927 23:17:48 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:42.927 23:17:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:42.927 192.168.100.9' 00:18:42.927 23:17:48 -- nvmf/common.sh@446 -- # tail -n +2 00:18:42.927 23:17:48 -- nvmf/common.sh@446 -- # head -n 1 00:18:42.927 23:17:48 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:42.927 23:17:48 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:42.927 23:17:48 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:42.927 23:17:48 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:42.927 23:17:48 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:42.928 23:17:48 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:42.928 23:17:48 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:42.928 23:17:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:42.928 23:17:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:42.928 23:17:48 -- common/autotest_common.sh@10 -- # set +x 00:18:42.928 23:17:48 -- nvmf/common.sh@469 -- # nvmfpid=628128 00:18:42.928 23:17:48 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:42.928 23:17:48 -- nvmf/common.sh@470 -- # waitforlisten 628128 00:18:42.928 23:17:48 -- common/autotest_common.sh@819 -- # '[' -z 628128 ']' 00:18:42.928 23:17:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.928 23:17:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:42.928 23:17:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.928 23:17:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:42.928 23:17:48 -- common/autotest_common.sh@10 -- # set +x 00:18:42.928 [2024-11-02 23:17:48.519138] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:18:42.928 [2024-11-02 23:17:48.519197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.928 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.928 [2024-11-02 23:17:48.589340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.928 [2024-11-02 23:17:48.658203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:42.928 [2024-11-02 23:17:48.658318] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.928 [2024-11-02 23:17:48.658328] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.928 [2024-11-02 23:17:48.658338] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.928 [2024-11-02 23:17:48.658359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.616 23:17:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.616 23:17:49 -- common/autotest_common.sh@852 -- # return 0 00:18:43.616 23:17:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:43.616 23:17:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:43.616 23:17:49 -- common/autotest_common.sh@10 -- # set +x 00:18:43.616 23:17:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.616 23:17:49 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:43.616 23:17:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.875 23:17:49 -- common/autotest_common.sh@10 -- # set +x 00:18:43.875 [2024-11-02 23:17:49.399074] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cd9230/0x1cdd720) succeed. 00:18:43.875 [2024-11-02 23:17:49.407651] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cda730/0x1d1edc0) succeed. 
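This test attaches to the target from bdevperf rather than the kernel initiator, but the NVME_CONNECT='nvme connect -i 15' and NVME_HOST settings prepared by nvmf/common.sh describe how a host-side connection to the same kind of listener (192.168.100.8:4420, nqn.2016-06.io.spdk:cnode1) would be made. An illustrative sketch only, using the hostnqn/hostid generated earlier in this job:

# Not executed by queue_depth.sh; shown for reference. -i 15 sets the number
# of I/O queues, which nvmf/common.sh adds to NVME_CONNECT for RDMA NICs.
modprobe nvme-rdma
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e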
00:18:43.875 23:17:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.875 23:17:49 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:43.875 23:17:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.875 23:17:49 -- common/autotest_common.sh@10 -- # set +x 00:18:43.875 Malloc0 00:18:43.875 23:17:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.875 23:17:49 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:43.875 23:17:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.875 23:17:49 -- common/autotest_common.sh@10 -- # set +x 00:18:43.875 23:17:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.875 23:17:49 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:43.875 23:17:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.875 23:17:49 -- common/autotest_common.sh@10 -- # set +x 00:18:43.875 23:17:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.875 23:17:49 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:43.875 23:17:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.875 23:17:49 -- common/autotest_common.sh@10 -- # set +x 00:18:43.875 [2024-11-02 23:17:49.494943] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:43.875 23:17:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.875 23:17:49 -- target/queue_depth.sh@30 -- # bdevperf_pid=628415 00:18:43.875 23:17:49 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:43.875 23:17:49 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.875 23:17:49 -- target/queue_depth.sh@33 -- # waitforlisten 628415 /var/tmp/bdevperf.sock 00:18:43.875 23:17:49 -- common/autotest_common.sh@819 -- # '[' -z 628415 ']' 00:18:43.875 23:17:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.875 23:17:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:43.875 23:17:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.875 23:17:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:43.875 23:17:49 -- common/autotest_common.sh@10 -- # set +x 00:18:43.875 [2024-11-02 23:17:49.545441] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:18:43.875 [2024-11-02 23:17:49.545486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628415 ] 00:18:43.875 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.875 [2024-11-02 23:17:49.614063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.134 [2024-11-02 23:17:49.687928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.701 23:17:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:44.701 23:17:50 -- common/autotest_common.sh@852 -- # return 0 00:18:44.701 23:17:50 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:44.701 23:17:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.701 23:17:50 -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 NVMe0n1 00:18:44.701 23:17:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.701 23:17:50 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.960 Running I/O for 10 seconds... 00:18:54.941 00:18:54.941 Latency(us) 00:18:54.941 [2024-11-02T22:18:00.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.941 [2024-11-02T22:18:00.698Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:54.941 Verification LBA range: start 0x0 length 0x4000 00:18:54.941 NVMe0n1 : 10.03 29424.87 114.94 0.00 0.00 34721.13 7864.32 31457.28 00:18:54.941 [2024-11-02T22:18:00.698Z] =================================================================================================================== 00:18:54.941 [2024-11-02T22:18:00.698Z] Total : 29424.87 114.94 0.00 0.00 34721.13 7864.32 31457.28 00:18:54.941 0 00:18:54.941 23:18:00 -- target/queue_depth.sh@39 -- # killprocess 628415 00:18:54.941 23:18:00 -- common/autotest_common.sh@926 -- # '[' -z 628415 ']' 00:18:54.941 23:18:00 -- common/autotest_common.sh@930 -- # kill -0 628415 00:18:54.941 23:18:00 -- common/autotest_common.sh@931 -- # uname 00:18:54.941 23:18:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:54.941 23:18:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 628415 00:18:54.941 23:18:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:54.941 23:18:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:54.941 23:18:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 628415' 00:18:54.941 killing process with pid 628415 00:18:54.941 23:18:00 -- common/autotest_common.sh@945 -- # kill 628415 00:18:54.941 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.941 00:18:54.941 Latency(us) 00:18:54.941 [2024-11-02T22:18:00.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.941 [2024-11-02T22:18:00.698Z] =================================================================================================================== 00:18:54.941 [2024-11-02T22:18:00.698Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.941 23:18:00 -- common/autotest_common.sh@950 -- # wait 628415 00:18:55.200 23:18:00 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:55.200 23:18:00 -- target/queue_depth.sh@43 -- # nvmftestfini 
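Unlike the one-second bdev_io_wait jobs, the queue-depth run starts bdevperf idle (-z) behind its own RPC socket, attaches the remote namespace over RDMA, and only then triggers the 10-second verify workload through bdevperf.py. A condensed sketch of that pattern with the same arguments as this run:

BP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
# -z: stay idle until told to run; -r: private RPC socket for this instance
$BP -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
# attach the NVMe-oF namespace exported at 192.168.100.8:4420; bdevperf sees it as NVMe0n1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# kick off the queued verify run and wait for the summary printed above
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill $BDEVPERF_PID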
00:18:55.200 23:18:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:55.200 23:18:00 -- nvmf/common.sh@116 -- # sync 00:18:55.200 23:18:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:55.200 23:18:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:55.200 23:18:00 -- nvmf/common.sh@119 -- # set +e 00:18:55.200 23:18:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:55.200 23:18:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:55.200 rmmod nvme_rdma 00:18:55.200 rmmod nvme_fabrics 00:18:55.200 23:18:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:55.200 23:18:00 -- nvmf/common.sh@123 -- # set -e 00:18:55.201 23:18:00 -- nvmf/common.sh@124 -- # return 0 00:18:55.201 23:18:00 -- nvmf/common.sh@477 -- # '[' -n 628128 ']' 00:18:55.201 23:18:00 -- nvmf/common.sh@478 -- # killprocess 628128 00:18:55.201 23:18:00 -- common/autotest_common.sh@926 -- # '[' -z 628128 ']' 00:18:55.201 23:18:00 -- common/autotest_common.sh@930 -- # kill -0 628128 00:18:55.201 23:18:00 -- common/autotest_common.sh@931 -- # uname 00:18:55.460 23:18:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:55.460 23:18:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 628128 00:18:55.460 23:18:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:55.460 23:18:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:55.460 23:18:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 628128' 00:18:55.460 killing process with pid 628128 00:18:55.460 23:18:01 -- common/autotest_common.sh@945 -- # kill 628128 00:18:55.460 23:18:01 -- common/autotest_common.sh@950 -- # wait 628128 00:18:55.719 23:18:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:55.719 23:18:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:55.719 00:18:55.719 real 0m19.636s 00:18:55.719 user 0m26.352s 00:18:55.719 sys 0m5.800s 00:18:55.719 23:18:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:55.719 23:18:01 -- common/autotest_common.sh@10 -- # set +x 00:18:55.719 ************************************ 00:18:55.719 END TEST nvmf_queue_depth 00:18:55.719 ************************************ 00:18:55.719 23:18:01 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:18:55.719 23:18:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:55.719 23:18:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:55.720 23:18:01 -- common/autotest_common.sh@10 -- # set +x 00:18:55.720 ************************************ 00:18:55.720 START TEST nvmf_multipath 00:18:55.720 ************************************ 00:18:55.720 23:18:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:18:55.720 * Looking for test storage... 
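Note on the nvmftestfini teardown traced above: it boils down to a short sequence. A simplified sketch of what the xtrace shows (the real helper in nvmf/common.sh retries the module removals and only unloads the kernel modules for RDMA transports):

sync
modprobe -v -r nvme-rdma       # rdma transport only; prints "rmmod nvme_rdma" as above
modprobe -v -r nvme-fabrics
target_pid=628128              # pid reported by the run above
kill "$target_pid"             # killprocess also verifies the process name (reactor_*) first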
00:18:55.720 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:55.720 23:18:01 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.720 23:18:01 -- nvmf/common.sh@7 -- # uname -s 00:18:55.720 23:18:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.720 23:18:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.720 23:18:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.720 23:18:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.720 23:18:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.720 23:18:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.720 23:18:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.720 23:18:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.720 23:18:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.720 23:18:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.720 23:18:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:55.720 23:18:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:55.720 23:18:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.720 23:18:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.720 23:18:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.720 23:18:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:55.720 23:18:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.720 23:18:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.720 23:18:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.720 23:18:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.720 23:18:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.720 23:18:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.720 23:18:01 -- paths/export.sh@5 -- # export PATH 00:18:55.720 23:18:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.720 23:18:01 -- nvmf/common.sh@46 -- # : 0 00:18:55.720 23:18:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:55.720 23:18:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:55.720 23:18:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:55.720 23:18:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.720 23:18:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.720 23:18:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:55.720 23:18:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:55.720 23:18:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:55.720 23:18:01 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:55.720 23:18:01 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:55.720 23:18:01 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:55.720 23:18:01 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:55.720 23:18:01 -- target/multipath.sh@43 -- # nvmftestinit 00:18:55.720 23:18:01 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:55.720 23:18:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.720 23:18:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:55.720 23:18:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:55.720 23:18:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:55.720 23:18:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.720 23:18:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.720 23:18:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.720 23:18:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:55.720 23:18:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:55.720 23:18:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:55.720 23:18:01 -- common/autotest_common.sh@10 -- # set +x 00:19:02.293 23:18:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:02.293 23:18:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:02.293 23:18:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:02.293 23:18:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:02.293 23:18:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:02.293 23:18:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:02.293 23:18:07 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:02.293 23:18:07 -- nvmf/common.sh@294 -- # net_devs=() 00:19:02.293 23:18:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:02.293 23:18:07 -- nvmf/common.sh@295 -- # e810=() 00:19:02.293 23:18:07 -- nvmf/common.sh@295 -- # local -ga e810 00:19:02.293 23:18:07 -- nvmf/common.sh@296 -- # x722=() 00:19:02.293 23:18:07 -- nvmf/common.sh@296 -- # local -ga x722 00:19:02.293 23:18:07 -- nvmf/common.sh@297 -- # mlx=() 00:19:02.293 23:18:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:02.293 23:18:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.293 23:18:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:02.293 23:18:07 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:02.293 23:18:07 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:02.293 23:18:07 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:02.293 23:18:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:02.293 23:18:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:02.293 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:02.293 23:18:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:02.293 23:18:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:02.293 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:02.293 23:18:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:02.293 23:18:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:02.293 23:18:07 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:02.293 23:18:07 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.293 23:18:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:02.293 23:18:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.293 23:18:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:02.293 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:02.293 23:18:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.293 23:18:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.293 23:18:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:02.293 23:18:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.293 23:18:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:02.293 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:02.293 23:18:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.293 23:18:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:02.293 23:18:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:02.293 23:18:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:02.293 23:18:07 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:02.293 23:18:07 -- nvmf/common.sh@57 -- # uname 00:19:02.293 23:18:07 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:02.293 23:18:07 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:02.293 23:18:07 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:02.293 23:18:07 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:02.293 23:18:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:02.293 23:18:07 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:02.293 23:18:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:02.293 23:18:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:02.293 23:18:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:02.293 23:18:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:02.293 23:18:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:02.293 23:18:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:02.293 23:18:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:02.293 23:18:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:02.293 23:18:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:02.293 23:18:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:02.293 23:18:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:02.293 23:18:07 -- nvmf/common.sh@104 -- # continue 2 00:19:02.293 23:18:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.293 23:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:02.293 23:18:07 -- nvmf/common.sh@104 -- # continue 2 00:19:02.293 23:18:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:02.293 23:18:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:02.293 23:18:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:02.293 23:18:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:02.293 23:18:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:02.293 23:18:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:02.293 23:18:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:02.293 23:18:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:02.293 23:18:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:02.293 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:02.293 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:02.293 altname enp217s0f0np0 00:19:02.293 altname ens818f0np0 00:19:02.293 inet 192.168.100.8/24 scope global mlx_0_0 00:19:02.293 valid_lft forever preferred_lft forever 00:19:02.293 23:18:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:02.293 23:18:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:02.293 23:18:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:02.293 23:18:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:02.294 23:18:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:02.294 23:18:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:02.294 23:18:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:02.294 23:18:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:02.294 23:18:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:02.294 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:02.294 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:02.294 altname enp217s0f1np1 00:19:02.294 altname ens818f1np1 00:19:02.294 inet 192.168.100.9/24 scope global mlx_0_1 00:19:02.294 valid_lft forever preferred_lft forever 00:19:02.294 23:18:07 -- nvmf/common.sh@410 -- # return 0 00:19:02.294 23:18:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:02.294 23:18:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:02.294 23:18:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:02.294 23:18:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:02.294 23:18:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:02.294 23:18:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:02.294 23:18:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:02.294 23:18:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:02.294 23:18:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:02.294 23:18:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:02.294 23:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.294 23:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:02.294 23:18:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:02.294 23:18:07 -- nvmf/common.sh@104 -- # continue 2 00:19:02.294 23:18:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:02.294 23:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:02.294 23:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:02.294 23:18:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:02.294 23:18:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:02.294 23:18:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:02.294 23:18:07 -- nvmf/common.sh@104 -- # continue 2 00:19:02.294 23:18:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:02.294 23:18:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:02.294 23:18:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:02.294 23:18:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:02.294 23:18:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:02.294 23:18:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:02.294 23:18:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:02.294 23:18:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:02.294 23:18:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:02.294 23:18:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:02.294 23:18:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:02.294 23:18:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:02.294 23:18:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:02.294 192.168.100.9' 00:19:02.294 23:18:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:02.294 192.168.100.9' 00:19:02.294 23:18:07 -- nvmf/common.sh@445 -- # head -n 1 00:19:02.294 23:18:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:02.294 23:18:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:02.294 192.168.100.9' 00:19:02.294 23:18:07 -- nvmf/common.sh@446 -- # tail -n +2 00:19:02.294 23:18:07 -- nvmf/common.sh@446 -- # head -n 1 00:19:02.294 23:18:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:02.294 23:18:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:02.294 23:18:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:02.294 23:18:07 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:02.294 23:18:07 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:02.294 23:18:07 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:02.294 run this test only with TCP transport for now 00:19:02.294 23:18:07 -- target/multipath.sh@53 -- # nvmftestfini 00:19:02.294 23:18:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:02.294 23:18:07 -- nvmf/common.sh@116 -- # sync 00:19:02.294 23:18:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@119 -- # set +e 00:19:02.294 23:18:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:02.294 23:18:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:02.294 rmmod nvme_rdma 00:19:02.294 rmmod nvme_fabrics 00:19:02.294 23:18:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:02.294 23:18:07 -- nvmf/common.sh@123 -- # set -e 00:19:02.294 23:18:07 -- nvmf/common.sh@124 -- # return 0 00:19:02.294 23:18:07 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:02.294 23:18:07 -- target/multipath.sh@54 -- # exit 0 00:19:02.294 23:18:07 -- target/multipath.sh@1 -- # nvmftestfini 00:19:02.294 23:18:07 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:19:02.294 23:18:07 -- nvmf/common.sh@116 -- # sync 00:19:02.294 23:18:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@119 -- # set +e 00:19:02.294 23:18:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:02.294 23:18:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:02.294 23:18:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:02.294 23:18:07 -- nvmf/common.sh@123 -- # set -e 00:19:02.294 23:18:07 -- nvmf/common.sh@124 -- # return 0 00:19:02.294 23:18:07 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:02.294 23:18:07 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:02.294 00:19:02.294 real 0m6.560s 00:19:02.294 user 0m1.794s 00:19:02.294 sys 0m4.946s 00:19:02.294 23:18:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.294 23:18:07 -- common/autotest_common.sh@10 -- # set +x 00:19:02.294 ************************************ 00:19:02.294 END TEST nvmf_multipath 00:19:02.294 ************************************ 00:19:02.294 23:18:07 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:02.294 23:18:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:02.294 23:18:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:02.294 23:18:07 -- common/autotest_common.sh@10 -- # set +x 00:19:02.294 ************************************ 00:19:02.294 START TEST nvmf_zcopy 00:19:02.294 ************************************ 00:19:02.294 23:18:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:02.294 * Looking for test storage... 
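Note on the multipath run above: it exits early on purpose, via the guard at target/multipath.sh@51-54 in the xtrace, which restricts the test to TCP for now. Roughly, with the transport variable name being an assumption (the log only shows its expanded value, rdma):

if [ "$TEST_TRANSPORT" != tcp ]; then
    echo 'run this test only with TCP transport for now'
    nvmftestfini
    exit 0
fi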
00:19:02.294 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:02.294 23:18:08 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.294 23:18:08 -- nvmf/common.sh@7 -- # uname -s 00:19:02.294 23:18:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.294 23:18:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.294 23:18:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.294 23:18:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.294 23:18:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.294 23:18:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.294 23:18:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.294 23:18:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.294 23:18:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.294 23:18:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.294 23:18:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:02.294 23:18:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:02.294 23:18:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.294 23:18:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.294 23:18:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.294 23:18:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:02.294 23:18:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.294 23:18:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.294 23:18:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.294 23:18:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.554 23:18:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.554 23:18:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.554 23:18:08 -- paths/export.sh@5 -- # export PATH 00:19:02.554 23:18:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.554 23:18:08 -- nvmf/common.sh@46 -- # : 0 00:19:02.554 23:18:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:02.554 23:18:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:02.554 23:18:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:02.554 23:18:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.554 23:18:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.554 23:18:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:02.554 23:18:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:02.554 23:18:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:02.554 23:18:08 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:02.554 23:18:08 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:02.554 23:18:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.554 23:18:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:02.554 23:18:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:02.554 23:18:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:02.554 23:18:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.554 23:18:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.554 23:18:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.554 23:18:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:02.554 23:18:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:02.554 23:18:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:02.554 23:18:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.132 23:18:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:09.132 23:18:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:09.132 23:18:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:09.132 23:18:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:09.132 23:18:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:09.132 23:18:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:09.132 23:18:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:09.132 23:18:14 -- nvmf/common.sh@294 -- # net_devs=() 00:19:09.132 23:18:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:09.132 23:18:14 -- nvmf/common.sh@295 -- # e810=() 00:19:09.132 23:18:14 -- nvmf/common.sh@295 -- # local -ga e810 00:19:09.132 23:18:14 -- nvmf/common.sh@296 -- # x722=() 
00:19:09.132 23:18:14 -- nvmf/common.sh@296 -- # local -ga x722 00:19:09.132 23:18:14 -- nvmf/common.sh@297 -- # mlx=() 00:19:09.132 23:18:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:09.132 23:18:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.132 23:18:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:09.132 23:18:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:09.132 23:18:14 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:09.132 23:18:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:09.132 23:18:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:09.132 23:18:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:09.132 23:18:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:09.133 23:18:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:09.133 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:09.133 23:18:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:09.133 23:18:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:09.133 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:09.133 23:18:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:09.133 23:18:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:09.133 23:18:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.133 23:18:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:09.133 23:18:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.133 23:18:14 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:09.133 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.133 23:18:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.133 23:18:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:09.133 23:18:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.133 23:18:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:09.133 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.133 23:18:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:09.133 23:18:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:09.133 23:18:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:09.133 23:18:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:09.133 23:18:14 -- nvmf/common.sh@57 -- # uname 00:19:09.133 23:18:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:09.133 23:18:14 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:09.133 23:18:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:09.133 23:18:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:09.133 23:18:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:09.133 23:18:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:09.133 23:18:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:09.133 23:18:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:09.133 23:18:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:09.133 23:18:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:09.133 23:18:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:09.133 23:18:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:09.133 23:18:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:09.133 23:18:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:09.133 23:18:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:09.133 23:18:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:09.133 23:18:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@104 -- # continue 2 00:19:09.133 23:18:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@104 -- # continue 2 00:19:09.133 23:18:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:09.133 23:18:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:09.133 23:18:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:09.133 23:18:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:09.133 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:09.133 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:09.133 altname enp217s0f0np0 00:19:09.133 altname ens818f0np0 00:19:09.133 inet 192.168.100.8/24 scope global mlx_0_0 00:19:09.133 valid_lft forever preferred_lft forever 00:19:09.133 23:18:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:09.133 23:18:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:09.133 23:18:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:09.133 23:18:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:09.133 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:09.133 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:09.133 altname enp217s0f1np1 00:19:09.133 altname ens818f1np1 00:19:09.133 inet 192.168.100.9/24 scope global mlx_0_1 00:19:09.133 valid_lft forever preferred_lft forever 00:19:09.133 23:18:14 -- nvmf/common.sh@410 -- # return 0 00:19:09.133 23:18:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:09.133 23:18:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:09.133 23:18:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:09.133 23:18:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:09.133 23:18:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:09.133 23:18:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:09.133 23:18:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:09.133 23:18:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:09.133 23:18:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:09.133 23:18:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@104 -- # continue 2 00:19:09.133 23:18:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.133 23:18:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:09.133 23:18:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@104 -- # continue 2 00:19:09.133 23:18:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:09.133 23:18:14 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:09.133 23:18:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:09.133 23:18:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:09.133 23:18:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:09.133 23:18:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:09.133 192.168.100.9' 00:19:09.133 23:18:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:09.133 192.168.100.9' 00:19:09.133 23:18:14 -- nvmf/common.sh@445 -- # head -n 1 00:19:09.133 23:18:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:09.133 23:18:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:09.133 192.168.100.9' 00:19:09.133 23:18:14 -- nvmf/common.sh@446 -- # tail -n +2 00:19:09.133 23:18:14 -- nvmf/common.sh@446 -- # head -n 1 00:19:09.134 23:18:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:09.134 23:18:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:09.134 23:18:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:09.134 23:18:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:09.134 23:18:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:09.134 23:18:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:09.134 23:18:14 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:09.134 23:18:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:09.134 23:18:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:09.134 23:18:14 -- common/autotest_common.sh@10 -- # set +x 00:19:09.134 23:18:14 -- nvmf/common.sh@469 -- # nvmfpid=637446 00:19:09.134 23:18:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:09.134 23:18:14 -- nvmf/common.sh@470 -- # waitforlisten 637446 00:19:09.134 23:18:14 -- common/autotest_common.sh@819 -- # '[' -z 637446 ']' 00:19:09.134 23:18:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.134 23:18:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:09.134 23:18:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.134 23:18:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:09.134 23:18:14 -- common/autotest_common.sh@10 -- # set +x 00:19:09.393 [2024-11-02 23:18:14.907429] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:09.393 [2024-11-02 23:18:14.907486] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.393 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.393 [2024-11-02 23:18:14.977601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.393 [2024-11-02 23:18:15.045234] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:09.393 [2024-11-02 23:18:15.045347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.393 [2024-11-02 23:18:15.045356] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.393 [2024-11-02 23:18:15.045364] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.393 [2024-11-02 23:18:15.045384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.961 23:18:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:09.961 23:18:15 -- common/autotest_common.sh@852 -- # return 0 00:19:09.961 23:18:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:09.961 23:18:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:09.961 23:18:15 -- common/autotest_common.sh@10 -- # set +x 00:19:10.220 23:18:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.220 23:18:15 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:10.220 23:18:15 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:10.220 Unsupported transport: rdma 00:19:10.220 23:18:15 -- target/zcopy.sh@17 -- # exit 0 00:19:10.220 23:18:15 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:10.220 23:18:15 -- common/autotest_common.sh@796 -- # type=--id 00:19:10.220 23:18:15 -- common/autotest_common.sh@797 -- # id=0 00:19:10.220 23:18:15 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:10.220 23:18:15 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:10.220 23:18:15 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:10.220 23:18:15 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:10.220 23:18:15 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:10.220 23:18:15 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:10.220 nvmf_trace.0 00:19:10.220 23:18:15 -- common/autotest_common.sh@811 -- # return 0 00:19:10.220 23:18:15 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:10.220 23:18:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:10.220 23:18:15 -- nvmf/common.sh@116 -- # sync 00:19:10.220 23:18:15 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:10.220 23:18:15 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:10.220 23:18:15 -- nvmf/common.sh@119 -- # set +e 00:19:10.220 23:18:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:10.220 23:18:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:10.220 rmmod nvme_rdma 00:19:10.220 rmmod nvme_fabrics 00:19:10.220 23:18:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:10.220 23:18:15 -- nvmf/common.sh@123 -- # set -e 00:19:10.220 23:18:15 -- nvmf/common.sh@124 -- # return 0 00:19:10.220 23:18:15 -- nvmf/common.sh@477 -- # '[' -n 637446 ']' 
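Note on the trace capture above: even though the zcopy test bails out with "Unsupported transport: rdma", the exit trap still archives the target's trace buffer, as the process_shm/tar lines show. The archive step is equivalent to the following (output path shortened; the full path in the log points into the workspace's ../output directory):

tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # $output_dir is a stand-in
# The target's own startup notice suggests how to inspect it:
#   spdk_trace -s nvmf -i 0            # runtime snapshot
#   copy /dev/shm/nvmf_trace.0         # offline analysis/debug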
00:19:10.220 23:18:15 -- nvmf/common.sh@478 -- # killprocess 637446 00:19:10.220 23:18:15 -- common/autotest_common.sh@926 -- # '[' -z 637446 ']' 00:19:10.220 23:18:15 -- common/autotest_common.sh@930 -- # kill -0 637446 00:19:10.220 23:18:15 -- common/autotest_common.sh@931 -- # uname 00:19:10.220 23:18:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:10.221 23:18:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 637446 00:19:10.221 23:18:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:10.221 23:18:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:10.221 23:18:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 637446' 00:19:10.221 killing process with pid 637446 00:19:10.221 23:18:15 -- common/autotest_common.sh@945 -- # kill 637446 00:19:10.221 23:18:15 -- common/autotest_common.sh@950 -- # wait 637446 00:19:10.480 23:18:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:10.480 23:18:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:10.480 00:19:10.480 real 0m8.172s 00:19:10.480 user 0m3.448s 00:19:10.480 sys 0m5.476s 00:19:10.480 23:18:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.480 23:18:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.480 ************************************ 00:19:10.480 END TEST nvmf_zcopy 00:19:10.480 ************************************ 00:19:10.480 23:18:16 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:10.480 23:18:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:10.480 23:18:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:10.480 23:18:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.480 ************************************ 00:19:10.480 START TEST nvmf_nmic 00:19:10.480 ************************************ 00:19:10.480 23:18:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:10.480 * Looking for test storage... 
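Note before the nmic test: it repeats the same mlx5 discovery and IP lookup already seen twice in this log. The part the later connect commands depend on reduces to one pipeline per interface (interface names and addresses are the ones this testbed reports):

ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8 (NVMF_FIRST_TARGET_IP)
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9 (NVMF_SECOND_TARGET_IP)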
00:19:10.740 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:10.740 23:18:16 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.740 23:18:16 -- nvmf/common.sh@7 -- # uname -s 00:19:10.740 23:18:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.740 23:18:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.740 23:18:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.740 23:18:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.740 23:18:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.740 23:18:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.740 23:18:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.740 23:18:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.740 23:18:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.740 23:18:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.740 23:18:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:10.740 23:18:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:10.740 23:18:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.740 23:18:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.740 23:18:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.740 23:18:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:10.740 23:18:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.740 23:18:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.740 23:18:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.740 23:18:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.740 23:18:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.740 23:18:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.740 23:18:16 -- paths/export.sh@5 -- # export PATH 00:19:10.740 23:18:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.740 23:18:16 -- nvmf/common.sh@46 -- # : 0 00:19:10.740 23:18:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:10.740 23:18:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:10.740 23:18:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:10.740 23:18:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.740 23:18:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.740 23:18:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:10.740 23:18:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:10.740 23:18:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:10.740 23:18:16 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.740 23:18:16 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.740 23:18:16 -- target/nmic.sh@14 -- # nvmftestinit 00:19:10.740 23:18:16 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:10.740 23:18:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.740 23:18:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:10.740 23:18:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:10.740 23:18:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:10.740 23:18:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.740 23:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.740 23:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.740 23:18:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:10.740 23:18:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:10.740 23:18:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:10.740 23:18:16 -- common/autotest_common.sh@10 -- # set +x 00:19:17.313 23:18:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:17.313 23:18:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:17.313 23:18:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:17.313 23:18:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:17.313 23:18:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:17.313 23:18:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:17.313 23:18:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:17.313 23:18:22 -- nvmf/common.sh@294 -- # net_devs=() 00:19:17.313 23:18:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:17.313 23:18:22 -- nvmf/common.sh@295 -- # 
e810=() 00:19:17.313 23:18:22 -- nvmf/common.sh@295 -- # local -ga e810 00:19:17.313 23:18:22 -- nvmf/common.sh@296 -- # x722=() 00:19:17.313 23:18:22 -- nvmf/common.sh@296 -- # local -ga x722 00:19:17.313 23:18:22 -- nvmf/common.sh@297 -- # mlx=() 00:19:17.313 23:18:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:17.313 23:18:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.313 23:18:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:17.313 23:18:22 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:17.313 23:18:22 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:17.313 23:18:22 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:17.313 23:18:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:17.313 23:18:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:17.313 23:18:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:17.313 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:17.313 23:18:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:17.313 23:18:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:17.313 23:18:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:17.313 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:17.313 23:18:22 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:17.313 23:18:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:17.313 23:18:22 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:17.313 23:18:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.313 23:18:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:19:17.313 23:18:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.313 23:18:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:17.313 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:17.313 23:18:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.313 23:18:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:17.313 23:18:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.313 23:18:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:17.313 23:18:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.313 23:18:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:17.313 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:17.313 23:18:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.313 23:18:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:17.313 23:18:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:17.313 23:18:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:17.313 23:18:22 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:17.313 23:18:22 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:17.313 23:18:22 -- nvmf/common.sh@57 -- # uname 00:19:17.313 23:18:22 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:17.313 23:18:22 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:17.313 23:18:22 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:17.313 23:18:22 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:17.313 23:18:22 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:17.313 23:18:22 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:17.313 23:18:22 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:17.313 23:18:22 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:17.314 23:18:22 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:17.314 23:18:22 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:17.314 23:18:22 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:17.314 23:18:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:17.314 23:18:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:17.314 23:18:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:17.314 23:18:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:17.314 23:18:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:17.314 23:18:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:17.314 23:18:22 -- nvmf/common.sh@104 -- # continue 2 00:19:17.314 23:18:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:17.314 23:18:22 -- nvmf/common.sh@104 -- # continue 2 00:19:17.314 23:18:22 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:19:17.314 23:18:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:17.314 23:18:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:17.314 23:18:22 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:17.314 23:18:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:17.314 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:17.314 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:17.314 altname enp217s0f0np0 00:19:17.314 altname ens818f0np0 00:19:17.314 inet 192.168.100.8/24 scope global mlx_0_0 00:19:17.314 valid_lft forever preferred_lft forever 00:19:17.314 23:18:22 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:17.314 23:18:22 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:17.314 23:18:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:17.314 23:18:22 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:17.314 23:18:22 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:17.314 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:17.314 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:17.314 altname enp217s0f1np1 00:19:17.314 altname ens818f1np1 00:19:17.314 inet 192.168.100.9/24 scope global mlx_0_1 00:19:17.314 valid_lft forever preferred_lft forever 00:19:17.314 23:18:22 -- nvmf/common.sh@410 -- # return 0 00:19:17.314 23:18:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:17.314 23:18:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:17.314 23:18:22 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:17.314 23:18:22 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:17.314 23:18:22 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:17.314 23:18:22 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:17.314 23:18:22 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:17.314 23:18:22 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:17.314 23:18:22 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:17.314 23:18:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:17.314 23:18:22 -- nvmf/common.sh@104 -- # continue 2 00:19:17.314 23:18:22 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.314 23:18:22 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:17.314 23:18:22 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:17.314 23:18:22 -- 
nvmf/common.sh@104 -- # continue 2 00:19:17.314 23:18:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:17.314 23:18:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:17.314 23:18:22 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:17.314 23:18:22 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:17.314 23:18:22 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:17.314 23:18:22 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:17.314 23:18:22 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:17.314 23:18:22 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:17.314 192.168.100.9' 00:19:17.314 23:18:23 -- nvmf/common.sh@445 -- # head -n 1 00:19:17.314 23:18:23 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:17.314 192.168.100.9' 00:19:17.314 23:18:23 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:17.314 23:18:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:17.314 192.168.100.9' 00:19:17.314 23:18:23 -- nvmf/common.sh@446 -- # tail -n +2 00:19:17.314 23:18:23 -- nvmf/common.sh@446 -- # head -n 1 00:19:17.314 23:18:23 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:17.314 23:18:23 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:17.314 23:18:23 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:17.314 23:18:23 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:17.314 23:18:23 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:17.314 23:18:23 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:17.314 23:18:23 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:17.314 23:18:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:17.314 23:18:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:17.314 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:19:17.314 23:18:23 -- nvmf/common.sh@469 -- # nvmfpid=640925 00:19:17.314 23:18:23 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:17.314 23:18:23 -- nvmf/common.sh@470 -- # waitforlisten 640925 00:19:17.314 23:18:23 -- common/autotest_common.sh@819 -- # '[' -z 640925 ']' 00:19:17.314 23:18:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.314 23:18:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:17.314 23:18:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.314 23:18:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:17.314 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:19:17.574 [2024-11-02 23:18:23.093609] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:17.574 [2024-11-02 23:18:23.093659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.574 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.574 [2024-11-02 23:18:23.163777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.574 [2024-11-02 23:18:23.238757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:17.574 [2024-11-02 23:18:23.238865] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.574 [2024-11-02 23:18:23.238875] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.574 [2024-11-02 23:18:23.238885] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.574 [2024-11-02 23:18:23.238931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.574 [2024-11-02 23:18:23.239039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.574 [2024-11-02 23:18:23.239062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.574 [2024-11-02 23:18:23.239063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.512 23:18:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:18.512 23:18:23 -- common/autotest_common.sh@852 -- # return 0 00:19:18.512 23:18:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:18.512 23:18:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:18.512 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:19:18.512 23:18:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.512 23:18:23 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:18.512 23:18:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.512 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:19:18.512 [2024-11-02 23:18:23.998507] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb0f090/0xb13580) succeed. 00:19:18.512 [2024-11-02 23:18:24.007736] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb10680/0xb54c20) succeed. 
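The trace above brings the target up: nvmf_tgt starts with reactors on cores 0-3, the RDMA transport is created, and an IB device is registered for each mlx5 port. The nmic steps that follow expose a malloc bdev through one subsystem and then deliberately try to claim the same bdev from a second subsystem. A minimal standalone sketch of that sequence (assuming the SPDK repo root as the working directory and the default /var/tmp/spdk.sock RPC socket; the commands mirror the rpc_cmd calls in the trace, not the nmic.sh script itself):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # test case 1: Malloc0 is already claimed by cnode1, so adding it to a second subsystem must fail
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      || echo 'Adding namespace failed - expected result.'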
00:19:18.512 23:18:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.512 23:18:24 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:18.512 23:18:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.512 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.512 Malloc0 00:19:18.512 23:18:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.512 23:18:24 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:18.512 23:18:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.512 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.512 23:18:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.512 23:18:24 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.512 23:18:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.512 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.512 23:18:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.512 23:18:24 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:18.512 23:18:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.512 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.512 [2024-11-02 23:18:24.177636] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:18.512 23:18:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.512 23:18:24 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:18.512 test case1: single bdev can't be used in multiple subsystems 00:19:18.512 23:18:24 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:18.512 23:18:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.512 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.512 23:18:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.512 23:18:24 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:18.512 23:18:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.512 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.512 23:18:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.512 23:18:24 -- target/nmic.sh@28 -- # nmic_status=0 00:19:18.513 23:18:24 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:18.513 23:18:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.513 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.513 [2024-11-02 23:18:24.201450] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:18.513 [2024-11-02 23:18:24.201471] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:18.513 [2024-11-02 23:18:24.201481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:18.513 request: 00:19:18.513 { 00:19:18.513 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:18.513 "namespace": { 00:19:18.513 "bdev_name": "Malloc0" 00:19:18.513 }, 00:19:18.513 "method": "nvmf_subsystem_add_ns", 00:19:18.513 "req_id": 1 00:19:18.513 } 00:19:18.513 Got JSON-RPC error response 00:19:18.513 response: 00:19:18.513 { 
00:19:18.513 "code": -32602, 00:19:18.513 "message": "Invalid parameters" 00:19:18.513 } 00:19:18.513 23:18:24 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:18.513 23:18:24 -- target/nmic.sh@29 -- # nmic_status=1 00:19:18.513 23:18:24 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:18.513 23:18:24 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:18.513 Adding namespace failed - expected result. 00:19:18.513 23:18:24 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:18.513 test case2: host connect to nvmf target in multiple paths 00:19:18.513 23:18:24 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:18.513 23:18:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.513 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.513 [2024-11-02 23:18:24.213513] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:18.513 23:18:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.513 23:18:24 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:19.452 23:18:25 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:20.831 23:18:26 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:20.831 23:18:26 -- common/autotest_common.sh@1177 -- # local i=0 00:19:20.831 23:18:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:20.831 23:18:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:20.831 23:18:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:22.765 23:18:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:22.765 23:18:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:22.765 23:18:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:22.765 23:18:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:22.765 23:18:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:22.765 23:18:28 -- common/autotest_common.sh@1187 -- # return 0 00:19:22.765 23:18:28 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:22.765 [global] 00:19:22.765 thread=1 00:19:22.765 invalidate=1 00:19:22.765 rw=write 00:19:22.765 time_based=1 00:19:22.765 runtime=1 00:19:22.765 ioengine=libaio 00:19:22.765 direct=1 00:19:22.765 bs=4096 00:19:22.765 iodepth=1 00:19:22.765 norandommap=0 00:19:22.765 numjobs=1 00:19:22.765 00:19:22.765 verify_dump=1 00:19:22.765 verify_backlog=512 00:19:22.765 verify_state_save=0 00:19:22.765 do_verify=1 00:19:22.766 verify=crc32c-intel 00:19:22.766 [job0] 00:19:22.766 filename=/dev/nvme0n1 00:19:22.766 Could not set queue depth (nvme0n1) 00:19:23.023 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.023 fio-3.35 00:19:23.023 Starting 1 thread 00:19:23.956 00:19:23.956 job0: (groupid=0, jobs=1): err= 0: pid=642138: Sat Nov 2 23:18:29 2024 00:19:23.956 read: IOPS=7121, BW=27.8MiB/s 
(29.2MB/s)(27.8MiB/1001msec) 00:19:23.956 slat (nsec): min=8300, max=30121, avg=8944.39, stdev=843.71 00:19:23.956 clat (nsec): min=46184, max=81395, avg=58097.37, stdev=3563.92 00:19:23.956 lat (usec): min=58, max=102, avg=67.04, stdev= 3.63 00:19:23.956 clat percentiles (nsec): 00:19:23.956 | 1.00th=[51456], 5.00th=[52480], 10.00th=[53504], 20.00th=[55040], 00:19:23.956 | 30.00th=[56064], 40.00th=[57088], 50.00th=[58112], 60.00th=[58624], 00:19:23.956 | 70.00th=[59648], 80.00th=[61184], 90.00th=[62720], 95.00th=[64256], 00:19:23.956 | 99.00th=[67072], 99.50th=[68096], 99.90th=[73216], 99.95th=[79360], 00:19:23.956 | 99.99th=[81408] 00:19:23.956 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:19:23.956 slat (nsec): min=10814, max=44361, avg=11572.92, stdev=1124.55 00:19:23.956 clat (usec): min=41, max=105, avg=56.02, stdev= 3.74 00:19:23.956 lat (usec): min=58, max=138, avg=67.59, stdev= 3.96 00:19:23.956 clat percentiles (usec): 00:19:23.956 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:19:23.956 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 56], 60.00th=[ 57], 00:19:23.956 | 70.00th=[ 58], 80.00th=[ 60], 90.00th=[ 61], 95.00th=[ 63], 00:19:23.956 | 99.00th=[ 66], 99.50th=[ 68], 99.90th=[ 74], 99.95th=[ 84], 00:19:23.956 | 99.99th=[ 106] 00:19:23.956 bw ( KiB/s): min=28928, max=28928, per=100.00%, avg=28928.00, stdev= 0.00, samples=1 00:19:23.956 iops : min= 7232, max= 7232, avg=7232.00, stdev= 0.00, samples=1 00:19:23.956 lat (usec) : 50=1.38%, 100=98.61%, 250=0.01% 00:19:23.956 cpu : usr=11.50%, sys=18.30%, ctx=14297, majf=0, minf=1 00:19:23.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.956 issued rwts: total=7129,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.956 00:19:23.956 Run status group 0 (all jobs): 00:19:23.956 READ: bw=27.8MiB/s (29.2MB/s), 27.8MiB/s-27.8MiB/s (29.2MB/s-29.2MB/s), io=27.8MiB (29.2MB), run=1001-1001msec 00:19:23.956 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:23.956 00:19:23.956 Disk stats (read/write): 00:19:23.956 nvme0n1: ios=6253/6656, merge=0/0, ticks=304/319, in_queue=623, util=90.58% 00:19:23.956 23:18:29 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:25.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:25.855 23:18:31 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:25.855 23:18:31 -- common/autotest_common.sh@1198 -- # local i=0 00:19:25.855 23:18:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:25.855 23:18:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:25.855 23:18:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:25.855 23:18:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:26.114 23:18:31 -- common/autotest_common.sh@1210 -- # return 0 00:19:26.114 23:18:31 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:26.114 23:18:31 -- target/nmic.sh@53 -- # nvmftestfini 00:19:26.114 23:18:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:26.114 23:18:31 -- nvmf/common.sh@116 -- # sync 00:19:26.114 23:18:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:26.114 23:18:31 
-- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:26.114 23:18:31 -- nvmf/common.sh@119 -- # set +e 00:19:26.114 23:18:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:26.114 23:18:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:26.114 rmmod nvme_rdma 00:19:26.114 rmmod nvme_fabrics 00:19:26.114 23:18:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:26.114 23:18:31 -- nvmf/common.sh@123 -- # set -e 00:19:26.114 23:18:31 -- nvmf/common.sh@124 -- # return 0 00:19:26.114 23:18:31 -- nvmf/common.sh@477 -- # '[' -n 640925 ']' 00:19:26.114 23:18:31 -- nvmf/common.sh@478 -- # killprocess 640925 00:19:26.114 23:18:31 -- common/autotest_common.sh@926 -- # '[' -z 640925 ']' 00:19:26.114 23:18:31 -- common/autotest_common.sh@930 -- # kill -0 640925 00:19:26.114 23:18:31 -- common/autotest_common.sh@931 -- # uname 00:19:26.114 23:18:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:26.114 23:18:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 640925 00:19:26.114 23:18:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:26.114 23:18:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:26.114 23:18:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 640925' 00:19:26.114 killing process with pid 640925 00:19:26.114 23:18:31 -- common/autotest_common.sh@945 -- # kill 640925 00:19:26.114 23:18:31 -- common/autotest_common.sh@950 -- # wait 640925 00:19:26.372 23:18:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:26.372 23:18:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:26.372 00:19:26.372 real 0m15.880s 00:19:26.372 user 0m45.360s 00:19:26.372 sys 0m6.041s 00:19:26.372 23:18:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.372 23:18:32 -- common/autotest_common.sh@10 -- # set +x 00:19:26.372 ************************************ 00:19:26.372 END TEST nvmf_nmic 00:19:26.372 ************************************ 00:19:26.372 23:18:32 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:26.372 23:18:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:26.372 23:18:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:26.372 23:18:32 -- common/autotest_common.sh@10 -- # set +x 00:19:26.372 ************************************ 00:19:26.372 START TEST nvmf_fio_target 00:19:26.372 ************************************ 00:19:26.372 23:18:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:26.631 * Looking for test storage... 
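On the host side, the nmic run above connected to the same subsystem NQN over both RDMA listeners (ports 4420 and 4421), waited for the namespace to appear by serial, ran the single-job fio write, and then disconnected, which drops both controllers at once. A condensed sketch of that flow, using the host NQN/ID variables that nvmf/common.sh derives from nvme gen-hostnqn (the polling loop here is a simplified stand-in for the waitforserial helper):

  nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma \
      -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma \
      -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421
  # wait until a block device with the subsystem serial shows up
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
  # one disconnect by NQN tears down both paths ("disconnected 2 controller(s)" above)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1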
00:19:26.631 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:26.631 23:18:32 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.631 23:18:32 -- nvmf/common.sh@7 -- # uname -s 00:19:26.631 23:18:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.631 23:18:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.631 23:18:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.631 23:18:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.631 23:18:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.631 23:18:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.631 23:18:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.631 23:18:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.631 23:18:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.631 23:18:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.631 23:18:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:26.631 23:18:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:26.631 23:18:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.631 23:18:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.631 23:18:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.631 23:18:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:26.631 23:18:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.631 23:18:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.631 23:18:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.631 23:18:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.631 23:18:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.631 23:18:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.631 23:18:32 -- paths/export.sh@5 -- # export PATH 00:19:26.631 23:18:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.631 23:18:32 -- nvmf/common.sh@46 -- # : 0 00:19:26.631 23:18:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:26.631 23:18:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:26.631 23:18:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:26.631 23:18:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.631 23:18:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.631 23:18:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:26.631 23:18:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:26.631 23:18:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:26.631 23:18:32 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:26.631 23:18:32 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:26.631 23:18:32 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:26.631 23:18:32 -- target/fio.sh@16 -- # nvmftestinit 00:19:26.631 23:18:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:26.631 23:18:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.631 23:18:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:26.631 23:18:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:26.631 23:18:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:26.631 23:18:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.631 23:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.631 23:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.631 23:18:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:26.631 23:18:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:26.631 23:18:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:26.631 23:18:32 -- common/autotest_common.sh@10 -- # set +x 00:19:33.193 23:18:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:33.193 23:18:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:33.193 23:18:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:33.193 23:18:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:33.193 23:18:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:33.193 23:18:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:33.193 23:18:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:33.193 23:18:37 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:33.193 23:18:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:33.193 23:18:37 -- nvmf/common.sh@295 -- # e810=() 00:19:33.193 23:18:37 -- nvmf/common.sh@295 -- # local -ga e810 00:19:33.193 23:18:37 -- nvmf/common.sh@296 -- # x722=() 00:19:33.193 23:18:37 -- nvmf/common.sh@296 -- # local -ga x722 00:19:33.193 23:18:37 -- nvmf/common.sh@297 -- # mlx=() 00:19:33.193 23:18:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:33.193 23:18:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.193 23:18:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:33.193 23:18:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:33.193 23:18:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:33.193 23:18:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:33.194 23:18:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:33.194 23:18:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:33.194 23:18:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:33.194 23:18:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:33.194 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:33.194 23:18:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.194 23:18:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:33.194 23:18:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:33.194 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:33.194 23:18:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.194 23:18:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:33.194 23:18:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:33.194 23:18:37 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.194 23:18:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:33.194 23:18:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.194 23:18:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:33.194 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:33.194 23:18:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.194 23:18:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:33.194 23:18:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.194 23:18:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:33.194 23:18:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.194 23:18:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:33.194 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:33.194 23:18:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.194 23:18:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:33.194 23:18:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:33.194 23:18:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:33.194 23:18:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:33.194 23:18:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:33.194 23:18:37 -- nvmf/common.sh@57 -- # uname 00:19:33.194 23:18:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:33.194 23:18:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:33.194 23:18:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:33.194 23:18:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:33.194 23:18:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:33.194 23:18:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:33.194 23:18:38 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:33.194 23:18:38 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:33.194 23:18:38 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:33.194 23:18:38 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:33.194 23:18:38 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:33.194 23:18:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:33.194 23:18:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:33.194 23:18:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:33.194 23:18:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:33.194 23:18:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:33.194 23:18:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:33.194 23:18:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:33.194 23:18:38 -- nvmf/common.sh@104 -- # continue 2 00:19:33.194 23:18:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:33.194 23:18:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:33.194 23:18:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:33.194 23:18:38 -- 
nvmf/common.sh@104 -- # continue 2 00:19:33.194 23:18:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:33.194 23:18:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:33.194 23:18:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.194 23:18:38 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:33.194 23:18:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:33.194 23:18:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:33.194 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:33.194 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:33.194 altname enp217s0f0np0 00:19:33.194 altname ens818f0np0 00:19:33.194 inet 192.168.100.8/24 scope global mlx_0_0 00:19:33.194 valid_lft forever preferred_lft forever 00:19:33.194 23:18:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:33.194 23:18:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:33.194 23:18:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.194 23:18:38 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:33.194 23:18:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:33.194 23:18:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:33.194 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:33.194 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:33.194 altname enp217s0f1np1 00:19:33.194 altname ens818f1np1 00:19:33.194 inet 192.168.100.9/24 scope global mlx_0_1 00:19:33.194 valid_lft forever preferred_lft forever 00:19:33.194 23:18:38 -- nvmf/common.sh@410 -- # return 0 00:19:33.194 23:18:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:33.194 23:18:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:33.194 23:18:38 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:33.194 23:18:38 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:33.194 23:18:38 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:33.194 23:18:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:33.194 23:18:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:33.194 23:18:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:33.194 23:18:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:33.194 23:18:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:33.194 23:18:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:33.194 23:18:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:33.194 23:18:38 -- nvmf/common.sh@104 -- # continue 2 00:19:33.194 23:18:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:33.194 23:18:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.194 23:18:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:19:33.194 23:18:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:33.194 23:18:38 -- nvmf/common.sh@104 -- # continue 2 00:19:33.194 23:18:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:33.194 23:18:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:33.194 23:18:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.194 23:18:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:33.194 23:18:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:33.194 23:18:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:33.194 23:18:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.194 23:18:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:33.194 192.168.100.9' 00:19:33.194 23:18:38 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:33.194 192.168.100.9' 00:19:33.194 23:18:38 -- nvmf/common.sh@445 -- # head -n 1 00:19:33.194 23:18:38 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:33.194 23:18:38 -- nvmf/common.sh@446 -- # head -n 1 00:19:33.194 23:18:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:33.194 192.168.100.9' 00:19:33.194 23:18:38 -- nvmf/common.sh@446 -- # tail -n +2 00:19:33.194 23:18:38 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:33.194 23:18:38 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:33.194 23:18:38 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:33.195 23:18:38 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:33.195 23:18:38 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:33.195 23:18:38 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:33.195 23:18:38 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:33.195 23:18:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:33.195 23:18:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:33.195 23:18:38 -- common/autotest_common.sh@10 -- # set +x 00:19:33.195 23:18:38 -- nvmf/common.sh@469 -- # nvmfpid=645892 00:19:33.195 23:18:38 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.195 23:18:38 -- nvmf/common.sh@470 -- # waitforlisten 645892 00:19:33.195 23:18:38 -- common/autotest_common.sh@819 -- # '[' -z 645892 ']' 00:19:33.195 23:18:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.195 23:18:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:33.195 23:18:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.195 23:18:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:33.195 23:18:38 -- common/autotest_common.sh@10 -- # set +x 00:19:33.195 [2024-11-02 23:18:38.246729] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:33.195 [2024-11-02 23:18:38.246776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.195 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.195 [2024-11-02 23:18:38.316321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.195 [2024-11-02 23:18:38.389679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:33.195 [2024-11-02 23:18:38.389811] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.195 [2024-11-02 23:18:38.389821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.195 [2024-11-02 23:18:38.389830] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.195 [2024-11-02 23:18:38.389877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.195 [2024-11-02 23:18:38.389975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.195 [2024-11-02 23:18:38.390031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.195 [2024-11-02 23:18:38.390033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.453 23:18:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:33.453 23:18:39 -- common/autotest_common.sh@852 -- # return 0 00:19:33.453 23:18:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:33.453 23:18:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:33.453 23:18:39 -- common/autotest_common.sh@10 -- # set +x 00:19:33.453 23:18:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.453 23:18:39 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:33.711 [2024-11-02 23:18:39.280433] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1539090/0x153d580) succeed. 00:19:33.712 [2024-11-02 23:18:39.289642] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x153a680/0x157ec20) succeed. 
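The fio target setup that follows builds a richer namespace layout than the nmic test: plain malloc bdevs, a raid0 bdev striped over two of them, and a concat bdev over three more, all attached to a single subsystem before the host connects and waits for four namespaces. A condensed sketch of that construction, mirroring the rpc.py calls in the trace (the Malloc names are the ones the target auto-assigns and echoes back):

  scripts/rpc.py bdev_malloc_create 64 512        # returns Malloc0; repeated for Malloc1..Malloc6
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
  done
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420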
00:19:33.712 23:18:39 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:33.973 23:18:39 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:33.973 23:18:39 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.235 23:18:39 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:34.235 23:18:39 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.493 23:18:40 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:34.493 23:18:40 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:34.751 23:18:40 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:34.751 23:18:40 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:34.751 23:18:40 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:35.010 23:18:40 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:35.010 23:18:40 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:35.270 23:18:40 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:35.270 23:18:40 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:35.529 23:18:41 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:35.529 23:18:41 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:35.529 23:18:41 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:35.791 23:18:41 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:35.791 23:18:41 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.090 23:18:41 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:36.090 23:18:41 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:36.090 23:18:41 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:36.399 [2024-11-02 23:18:41.976480] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:36.399 23:18:42 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:36.658 23:18:42 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:36.658 23:18:42 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:38.036 23:18:43 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:38.036 23:18:43 -- common/autotest_common.sh@1177 -- # local 
i=0 00:19:38.036 23:18:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:38.036 23:18:43 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:38.036 23:18:43 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:38.036 23:18:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:39.979 23:18:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:39.979 23:18:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:39.979 23:18:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:39.979 23:18:45 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:39.979 23:18:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:39.979 23:18:45 -- common/autotest_common.sh@1187 -- # return 0 00:19:39.979 23:18:45 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:39.979 [global] 00:19:39.979 thread=1 00:19:39.979 invalidate=1 00:19:39.979 rw=write 00:19:39.979 time_based=1 00:19:39.979 runtime=1 00:19:39.979 ioengine=libaio 00:19:39.979 direct=1 00:19:39.979 bs=4096 00:19:39.979 iodepth=1 00:19:39.979 norandommap=0 00:19:39.979 numjobs=1 00:19:39.979 00:19:39.979 verify_dump=1 00:19:39.979 verify_backlog=512 00:19:39.979 verify_state_save=0 00:19:39.979 do_verify=1 00:19:39.979 verify=crc32c-intel 00:19:39.979 [job0] 00:19:39.979 filename=/dev/nvme0n1 00:19:39.979 [job1] 00:19:39.979 filename=/dev/nvme0n2 00:19:39.979 [job2] 00:19:39.979 filename=/dev/nvme0n3 00:19:39.979 [job3] 00:19:39.979 filename=/dev/nvme0n4 00:19:39.979 Could not set queue depth (nvme0n1) 00:19:39.979 Could not set queue depth (nvme0n2) 00:19:39.979 Could not set queue depth (nvme0n3) 00:19:39.979 Could not set queue depth (nvme0n4) 00:19:40.244 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.244 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.244 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.244 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.244 fio-3.35 00:19:40.244 Starting 4 threads 00:19:41.656 00:19:41.656 job0: (groupid=0, jobs=1): err= 0: pid=647270: Sat Nov 2 23:18:47 2024 00:19:41.656 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:19:41.656 slat (nsec): min=8328, max=29115, avg=8935.38, stdev=810.34 00:19:41.656 clat (usec): min=62, max=191, avg=95.13, stdev=23.50 00:19:41.656 lat (usec): min=70, max=200, avg=104.07, stdev=23.57 00:19:41.656 clat percentiles (usec): 00:19:41.656 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74], 00:19:41.656 | 30.00th=[ 76], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 108], 00:19:41.656 | 70.00th=[ 116], 80.00th=[ 122], 90.00th=[ 128], 95.00th=[ 131], 00:19:41.656 | 99.00th=[ 141], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 178], 00:19:41.656 | 99.99th=[ 192] 00:19:41.656 write: IOPS=4728, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec); 0 zone resets 00:19:41.656 slat (nsec): min=10470, max=40322, avg=11412.67, stdev=966.71 00:19:41.656 clat (usec): min=60, max=173, avg=93.53, stdev=20.81 00:19:41.656 lat (usec): min=71, max=184, avg=104.94, stdev=20.85 00:19:41.656 clat percentiles (usec): 00:19:41.656 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 73], 00:19:41.656 | 30.00th=[ 
75], 40.00th=[ 79], 50.00th=[ 97], 60.00th=[ 105], 00:19:41.656 | 70.00th=[ 111], 80.00th=[ 115], 90.00th=[ 120], 95.00th=[ 125], 00:19:41.656 | 99.00th=[ 137], 99.50th=[ 149], 99.90th=[ 157], 99.95th=[ 161], 00:19:41.656 | 99.99th=[ 174] 00:19:41.656 bw ( KiB/s): min=16351, max=16351, per=22.57%, avg=16351.00, stdev= 0.00, samples=1 00:19:41.656 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:19:41.656 lat (usec) : 100=54.95%, 250=45.05% 00:19:41.656 cpu : usr=7.70%, sys=12.10%, ctx=9341, majf=0, minf=1 00:19:41.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.656 issued rwts: total=4608,4733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.656 job1: (groupid=0, jobs=1): err= 0: pid=647285: Sat Nov 2 23:18:47 2024 00:19:41.656 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:19:41.656 slat (nsec): min=8396, max=28377, avg=9095.11, stdev=896.28 00:19:41.656 clat (usec): min=64, max=189, avg=89.16, stdev=20.47 00:19:41.656 lat (usec): min=73, max=199, avg=98.25, stdev=20.53 00:19:41.656 clat percentiles (usec): 00:19:41.656 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 78], 00:19:41.656 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:19:41.656 | 70.00th=[ 88], 80.00th=[ 92], 90.00th=[ 122], 95.00th=[ 143], 00:19:41.656 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 186], 99.95th=[ 190], 00:19:41.656 | 99.99th=[ 190] 00:19:41.656 write: IOPS=4988, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1001msec); 0 zone resets 00:19:41.656 slat (nsec): min=10304, max=44083, avg=11990.51, stdev=2340.17 00:19:41.656 clat (usec): min=63, max=244, avg=92.54, stdev=27.98 00:19:41.656 lat (usec): min=74, max=257, avg=104.53, stdev=28.93 00:19:41.656 clat percentiles (usec): 00:19:41.656 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 76], 00:19:41.656 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 84], 00:19:41.656 | 70.00th=[ 87], 80.00th=[ 116], 90.00th=[ 143], 95.00th=[ 155], 00:19:41.656 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 202], 99.95th=[ 204], 00:19:41.656 | 99.99th=[ 245] 00:19:41.656 bw ( KiB/s): min=21748, max=21748, per=30.02%, avg=21748.00, stdev= 0.00, samples=1 00:19:41.656 iops : min= 5437, max= 5437, avg=5437.00, stdev= 0.00, samples=1 00:19:41.656 lat (usec) : 100=82.27%, 250=17.73% 00:19:41.656 cpu : usr=7.50%, sys=13.10%, ctx=9601, majf=0, minf=1 00:19:41.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.656 issued rwts: total=4608,4993,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.656 job2: (groupid=0, jobs=1): err= 0: pid=647308: Sat Nov 2 23:18:47 2024 00:19:41.656 read: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1000msec) 00:19:41.656 slat (nsec): min=8569, max=31276, avg=10134.70, stdev=2504.35 00:19:41.656 clat (usec): min=67, max=204, avg=125.00, stdev=16.21 00:19:41.656 lat (usec): min=87, max=216, avg=135.13, stdev=16.82 00:19:41.656 clat percentiles (usec): 00:19:41.656 | 1.00th=[ 87], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 115], 00:19:41.656 | 30.00th=[ 118], 40.00th=[ 
121], 50.00th=[ 123], 60.00th=[ 126], 00:19:41.656 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 155], 00:19:41.656 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 198], 00:19:41.656 | 99.99th=[ 204] 00:19:41.656 write: IOPS=3795, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1000msec); 0 zone resets 00:19:41.656 slat (nsec): min=10469, max=47889, avg=12447.91, stdev=2641.52 00:19:41.656 clat (usec): min=71, max=199, avg=118.40, stdev=18.20 00:19:41.656 lat (usec): min=82, max=212, avg=130.85, stdev=19.04 00:19:41.656 clat percentiles (usec): 00:19:41.656 | 1.00th=[ 83], 5.00th=[ 98], 10.00th=[ 102], 20.00th=[ 106], 00:19:41.656 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 118], 00:19:41.656 | 70.00th=[ 123], 80.00th=[ 130], 90.00th=[ 143], 95.00th=[ 155], 00:19:41.656 | 99.00th=[ 184], 99.50th=[ 186], 99.90th=[ 194], 99.95th=[ 196], 00:19:41.656 | 99.99th=[ 200] 00:19:41.656 bw ( KiB/s): min=16351, max=16351, per=22.57%, avg=16351.00, stdev= 0.00, samples=1 00:19:41.656 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:19:41.656 lat (usec) : 100=5.68%, 250=94.32% 00:19:41.656 cpu : usr=6.00%, sys=10.30%, ctx=7380, majf=0, minf=1 00:19:41.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.656 issued rwts: total=3584,3795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.656 job3: (groupid=0, jobs=1): err= 0: pid=647316: Sat Nov 2 23:18:47 2024 00:19:41.656 read: IOPS=4150, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1001msec) 00:19:41.656 slat (nsec): min=8593, max=26625, avg=9471.68, stdev=1450.78 00:19:41.656 clat (usec): min=70, max=198, avg=103.29, stdev=24.15 00:19:41.656 lat (usec): min=78, max=213, avg=112.76, stdev=24.41 00:19:41.656 clat percentiles (usec): 00:19:41.656 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:19:41.656 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 97], 00:19:41.656 | 70.00th=[ 114], 80.00th=[ 128], 90.00th=[ 143], 95.00th=[ 151], 00:19:41.656 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 198], 99.95th=[ 198], 00:19:41.656 | 99.99th=[ 200] 00:19:41.656 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:19:41.656 slat (nsec): min=10646, max=44047, avg=12051.67, stdev=1956.18 00:19:41.656 clat (usec): min=67, max=191, avg=98.42, stdev=24.30 00:19:41.656 lat (usec): min=78, max=202, avg=110.48, stdev=24.65 00:19:41.656 clat percentiles (usec): 00:19:41.656 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 81], 00:19:41.656 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 91], 00:19:41.656 | 70.00th=[ 109], 80.00th=[ 124], 90.00th=[ 137], 95.00th=[ 147], 00:19:41.656 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 184], 00:19:41.656 | 99.99th=[ 192] 00:19:41.656 bw ( KiB/s): min=20439, max=20439, per=28.21%, avg=20439.00, stdev= 0.00, samples=1 00:19:41.656 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:19:41.656 lat (usec) : 100=65.72%, 250=34.28% 00:19:41.656 cpu : usr=6.80%, sys=12.30%, ctx=8763, majf=0, minf=1 00:19:41.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:41.656 issued rwts: total=4155,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.656 00:19:41.656 Run status group 0 (all jobs): 00:19:41.656 READ: bw=66.2MiB/s (69.4MB/s), 14.0MiB/s-18.0MiB/s (14.7MB/s-18.9MB/s), io=66.2MiB (69.4MB), run=1000-1001msec 00:19:41.656 WRITE: bw=70.7MiB/s (74.2MB/s), 14.8MiB/s-19.5MiB/s (15.5MB/s-20.4MB/s), io=70.8MiB (74.3MB), run=1000-1001msec 00:19:41.656 00:19:41.656 Disk stats (read/write): 00:19:41.656 nvme0n1: ios=3633/3639, merge=0/0, ticks=338/325, in_queue=663, util=82.25% 00:19:41.656 nvme0n2: ios=3669/4096, merge=0/0, ticks=305/340, in_queue=645, util=83.66% 00:19:41.656 nvme0n3: ios=2973/3072, merge=0/0, ticks=338/343, in_queue=681, util=87.72% 00:19:41.656 nvme0n4: ios=3584/3840, merge=0/0, ticks=341/323, in_queue=664, util=89.16% 00:19:41.656 23:18:47 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:41.656 [global] 00:19:41.656 thread=1 00:19:41.656 invalidate=1 00:19:41.656 rw=randwrite 00:19:41.656 time_based=1 00:19:41.656 runtime=1 00:19:41.656 ioengine=libaio 00:19:41.656 direct=1 00:19:41.656 bs=4096 00:19:41.656 iodepth=1 00:19:41.656 norandommap=0 00:19:41.656 numjobs=1 00:19:41.656 00:19:41.656 verify_dump=1 00:19:41.656 verify_backlog=512 00:19:41.656 verify_state_save=0 00:19:41.656 do_verify=1 00:19:41.656 verify=crc32c-intel 00:19:41.656 [job0] 00:19:41.656 filename=/dev/nvme0n1 00:19:41.656 [job1] 00:19:41.656 filename=/dev/nvme0n2 00:19:41.656 [job2] 00:19:41.656 filename=/dev/nvme0n3 00:19:41.656 [job3] 00:19:41.656 filename=/dev/nvme0n4 00:19:41.656 Could not set queue depth (nvme0n1) 00:19:41.656 Could not set queue depth (nvme0n2) 00:19:41.656 Could not set queue depth (nvme0n3) 00:19:41.656 Could not set queue depth (nvme0n4) 00:19:41.922 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:41.922 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:41.922 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:41.922 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:41.922 fio-3.35 00:19:41.922 Starting 4 threads 00:19:43.307 00:19:43.307 job0: (groupid=0, jobs=1): err= 0: pid=647718: Sat Nov 2 23:18:48 2024 00:19:43.307 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:19:43.307 slat (nsec): min=8300, max=23233, avg=8947.78, stdev=672.81 00:19:43.307 clat (usec): min=66, max=160, avg=84.97, stdev= 6.74 00:19:43.307 lat (usec): min=75, max=169, avg=93.92, stdev= 6.76 00:19:43.307 clat percentiles (usec): 00:19:43.307 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:19:43.307 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 86], 00:19:43.307 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 94], 95.00th=[ 97], 00:19:43.307 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 111], 99.95th=[ 113], 00:19:43.307 | 99.99th=[ 161] 00:19:43.307 write: IOPS=5298, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1001msec); 0 zone resets 00:19:43.307 slat (nsec): min=10314, max=44520, avg=11169.36, stdev=897.34 00:19:43.307 clat (usec): min=64, max=151, avg=81.65, stdev= 7.83 00:19:43.307 lat (usec): min=75, max=162, avg=92.82, stdev= 7.89 00:19:43.307 clat percentiles (usec): 00:19:43.307 | 1.00th=[ 69], 5.00th=[ 72], 
10.00th=[ 74], 20.00th=[ 76], 00:19:43.307 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83], 00:19:43.307 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 91], 95.00th=[ 96], 00:19:43.307 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 124], 99.95th=[ 133], 00:19:43.307 | 99.99th=[ 151] 00:19:43.307 bw ( KiB/s): min=21336, max=21336, per=28.68%, avg=21336.00, stdev= 0.00, samples=1 00:19:43.307 iops : min= 5334, max= 5334, avg=5334.00, stdev= 0.00, samples=1 00:19:43.307 lat (usec) : 100=97.37%, 250=2.63% 00:19:43.307 cpu : usr=9.30%, sys=12.80%, ctx=10424, majf=0, minf=1 00:19:43.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:43.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.307 issued rwts: total=5120,5304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:43.307 job1: (groupid=0, jobs=1): err= 0: pid=647730: Sat Nov 2 23:18:48 2024 00:19:43.307 read: IOPS=3682, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec) 00:19:43.307 slat (nsec): min=8317, max=28385, avg=9005.46, stdev=841.33 00:19:43.307 clat (usec): min=75, max=295, avg=120.39, stdev=10.84 00:19:43.307 lat (usec): min=84, max=304, avg=129.39, stdev=10.84 00:19:43.307 clat percentiles (usec): 00:19:43.307 | 1.00th=[ 98], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 113], 00:19:43.307 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 123], 00:19:43.307 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 133], 95.00th=[ 137], 00:19:43.307 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 180], 99.95th=[ 182], 00:19:43.307 | 99.99th=[ 297] 00:19:43.307 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:19:43.307 slat (nsec): min=10124, max=45526, avg=11160.44, stdev=1134.79 00:19:43.307 clat (usec): min=67, max=177, avg=111.93, stdev= 9.90 00:19:43.307 lat (usec): min=78, max=188, avg=123.09, stdev= 9.94 00:19:43.307 clat percentiles (usec): 00:19:43.307 | 1.00th=[ 91], 5.00th=[ 97], 10.00th=[ 100], 20.00th=[ 104], 00:19:43.307 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 115], 00:19:43.307 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 124], 95.00th=[ 128], 00:19:43.307 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 161], 99.95th=[ 163], 00:19:43.307 | 99.99th=[ 178] 00:19:43.307 bw ( KiB/s): min=16384, max=16384, per=22.02%, avg=16384.00, stdev= 0.00, samples=1 00:19:43.307 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:43.307 lat (usec) : 100=6.13%, 250=93.86%, 500=0.01% 00:19:43.307 cpu : usr=5.80%, sys=10.70%, ctx=7782, majf=0, minf=1 00:19:43.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:43.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.307 issued rwts: total=3686,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:43.307 job2: (groupid=0, jobs=1): err= 0: pid=647751: Sat Nov 2 23:18:48 2024 00:19:43.307 read: IOPS=3706, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1001msec) 00:19:43.307 slat (nsec): min=9155, max=23281, avg=9591.67, stdev=873.24 00:19:43.307 clat (usec): min=78, max=292, avg=118.85, stdev=10.36 00:19:43.307 lat (usec): min=87, max=301, avg=128.44, stdev=10.38 00:19:43.307 clat percentiles (usec): 00:19:43.307 | 1.00th=[ 88], 5.00th=[ 
102], 10.00th=[ 109], 20.00th=[ 113], 00:19:43.307 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:19:43.307 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 130], 95.00th=[ 135], 00:19:43.307 | 99.00th=[ 143], 99.50th=[ 149], 99.90th=[ 161], 99.95th=[ 172], 00:19:43.307 | 99.99th=[ 293] 00:19:43.308 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:19:43.308 slat (nsec): min=11448, max=46080, avg=11966.72, stdev=1331.49 00:19:43.308 clat (usec): min=74, max=156, avg=110.71, stdev= 8.74 00:19:43.308 lat (usec): min=86, max=168, avg=122.68, stdev= 8.76 00:19:43.308 clat percentiles (usec): 00:19:43.308 | 1.00th=[ 86], 5.00th=[ 97], 10.00th=[ 101], 20.00th=[ 104], 00:19:43.308 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 113], 00:19:43.308 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 125], 00:19:43.308 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 147], 99.95th=[ 151], 00:19:43.308 | 99.99th=[ 157] 00:19:43.308 bw ( KiB/s): min=16384, max=16384, per=22.02%, avg=16384.00, stdev= 0.00, samples=1 00:19:43.308 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:43.308 lat (usec) : 100=6.82%, 250=93.17%, 500=0.01% 00:19:43.308 cpu : usr=6.20%, sys=12.80%, ctx=7806, majf=0, minf=1 00:19:43.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:43.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.308 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:43.308 job3: (groupid=0, jobs=1): err= 0: pid=647757: Sat Nov 2 23:18:48 2024 00:19:43.308 read: IOPS=4628, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1001msec) 00:19:43.308 slat (nsec): min=8453, max=31473, avg=9069.36, stdev=893.53 00:19:43.308 clat (usec): min=68, max=182, avg=91.71, stdev= 6.99 00:19:43.308 lat (usec): min=81, max=191, avg=100.78, stdev= 7.04 00:19:43.308 clat percentiles (usec): 00:19:43.308 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:19:43.308 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 93], 00:19:43.308 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 101], 95.00th=[ 104], 00:19:43.308 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 121], 00:19:43.308 | 99.99th=[ 182] 00:19:43.308 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:43.308 slat (nsec): min=10200, max=40190, avg=11329.64, stdev=969.66 00:19:43.308 clat (usec): min=70, max=146, avg=87.99, stdev= 6.90 00:19:43.308 lat (usec): min=81, max=157, avg=99.32, stdev= 7.00 00:19:43.308 clat percentiles (usec): 00:19:43.308 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:19:43.308 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 89], 00:19:43.308 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 101], 00:19:43.308 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 118], 99.95th=[ 122], 00:19:43.308 | 99.99th=[ 147] 00:19:43.308 bw ( KiB/s): min=20480, max=20480, per=27.53%, avg=20480.00, stdev= 0.00, samples=1 00:19:43.308 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:43.308 lat (usec) : 100=91.52%, 250=8.48% 00:19:43.308 cpu : usr=7.50%, sys=13.30%, ctx=9753, majf=0, minf=1 00:19:43.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:43.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:43.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.308 issued rwts: total=4633,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:43.308 00:19:43.308 Run status group 0 (all jobs): 00:19:43.308 READ: bw=66.9MiB/s (70.2MB/s), 14.4MiB/s-20.0MiB/s (15.1MB/s-20.9MB/s), io=67.0MiB (70.2MB), run=1001-1001msec 00:19:43.308 WRITE: bw=72.6MiB/s (76.2MB/s), 16.0MiB/s-20.7MiB/s (16.8MB/s-21.7MB/s), io=72.7MiB (76.3MB), run=1001-1001msec 00:19:43.308 00:19:43.308 Disk stats (read/write): 00:19:43.308 nvme0n1: ios=4173/4608, merge=0/0, ticks=333/335, in_queue=668, util=84.25% 00:19:43.308 nvme0n2: ios=3072/3406, merge=0/0, ticks=321/360, in_queue=681, util=85.20% 00:19:43.308 nvme0n3: ios=3072/3404, merge=0/0, ticks=338/343, in_queue=681, util=88.36% 00:19:43.308 nvme0n4: ios=4035/4096, merge=0/0, ticks=332/329, in_queue=661, util=89.50% 00:19:43.308 23:18:48 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:43.308 [global] 00:19:43.308 thread=1 00:19:43.308 invalidate=1 00:19:43.308 rw=write 00:19:43.308 time_based=1 00:19:43.308 runtime=1 00:19:43.308 ioengine=libaio 00:19:43.308 direct=1 00:19:43.308 bs=4096 00:19:43.308 iodepth=128 00:19:43.308 norandommap=0 00:19:43.308 numjobs=1 00:19:43.308 00:19:43.308 verify_dump=1 00:19:43.308 verify_backlog=512 00:19:43.308 verify_state_save=0 00:19:43.308 do_verify=1 00:19:43.308 verify=crc32c-intel 00:19:43.308 [job0] 00:19:43.308 filename=/dev/nvme0n1 00:19:43.308 [job1] 00:19:43.308 filename=/dev/nvme0n2 00:19:43.308 [job2] 00:19:43.308 filename=/dev/nvme0n3 00:19:43.308 [job3] 00:19:43.308 filename=/dev/nvme0n4 00:19:43.308 Could not set queue depth (nvme0n1) 00:19:43.308 Could not set queue depth (nvme0n2) 00:19:43.308 Could not set queue depth (nvme0n3) 00:19:43.308 Could not set queue depth (nvme0n4) 00:19:43.566 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.566 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.566 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.566 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.566 fio-3.35 00:19:43.566 Starting 4 threads 00:19:44.943 00:19:44.943 job0: (groupid=0, jobs=1): err= 0: pid=648147: Sat Nov 2 23:18:50 2024 00:19:44.943 read: IOPS=4025, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1004msec) 00:19:44.943 slat (nsec): min=1997, max=3638.8k, avg=125092.25, stdev=506369.84 00:19:44.943 clat (usec): min=2860, max=19686, avg=16115.08, stdev=3436.02 00:19:44.943 lat (usec): min=3486, max=21225, avg=16240.18, stdev=3425.64 00:19:44.943 clat percentiles (usec): 00:19:44.943 | 1.00th=[ 7242], 5.00th=[ 7570], 10.00th=[ 9503], 20.00th=[14615], 00:19:44.943 | 30.00th=[14877], 40.00th=[15533], 50.00th=[18220], 60.00th=[18220], 00:19:44.943 | 70.00th=[18482], 80.00th=[18482], 90.00th=[18744], 95.00th=[18744], 00:19:44.943 | 99.00th=[19530], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792], 00:19:44.943 | 99.99th=[19792] 00:19:44.943 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:44.943 slat (usec): min=2, max=3812, avg=116.56, stdev=460.40 00:19:44.943 clat (usec): min=6990, max=18946, avg=15108.05, stdev=3310.08 00:19:44.943 lat (usec): 
min=7002, max=18957, avg=15224.61, stdev=3304.86 00:19:44.943 clat percentiles (usec): 00:19:44.943 | 1.00th=[ 7177], 5.00th=[ 7308], 10.00th=[ 7898], 20.00th=[13960], 00:19:44.943 | 30.00th=[14222], 40.00th=[14484], 50.00th=[16188], 60.00th=[17433], 00:19:44.943 | 70.00th=[17695], 80.00th=[17695], 90.00th=[17957], 95.00th=[18220], 00:19:44.943 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:19:44.943 | 99.99th=[19006] 00:19:44.943 bw ( KiB/s): min=13224, max=19544, per=16.73%, avg=16384.00, stdev=4468.91, samples=2 00:19:44.943 iops : min= 3306, max= 4886, avg=4096.00, stdev=1117.23, samples=2 00:19:44.943 lat (msec) : 4=0.29%, 10=11.10%, 20=88.61% 00:19:44.943 cpu : usr=1.20%, sys=3.29%, ctx=1909, majf=0, minf=1 00:19:44.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:44.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.943 issued rwts: total=4042,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.943 job1: (groupid=0, jobs=1): err= 0: pid=648159: Sat Nov 2 23:18:50 2024 00:19:44.944 read: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(43.5MiB/1001msec) 00:19:44.944 slat (usec): min=2, max=1321, avg=44.26, stdev=164.37 00:19:44.944 clat (usec): min=758, max=8374, avg=5775.97, stdev=762.04 00:19:44.944 lat (usec): min=1515, max=8378, avg=5820.23, stdev=750.22 00:19:44.944 clat percentiles (usec): 00:19:44.944 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5342], 00:19:44.944 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5473], 60.00th=[ 5538], 00:19:44.944 | 70.00th=[ 5669], 80.00th=[ 6587], 90.00th=[ 6849], 95.00th=[ 7046], 00:19:44.944 | 99.00th=[ 8029], 99.50th=[ 8094], 99.90th=[ 8356], 99.95th=[ 8356], 00:19:44.944 | 99.99th=[ 8356] 00:19:44.944 write: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(44.0MiB/1001msec); 0 zone resets 00:19:44.944 slat (usec): min=2, max=1583, avg=42.16, stdev=154.85 00:19:44.944 clat (usec): min=4144, max=8387, avg=5540.44, stdev=785.79 00:19:44.944 lat (usec): min=4788, max=8396, avg=5582.60, stdev=776.42 00:19:44.944 clat percentiles (usec): 00:19:44.944 | 1.00th=[ 4359], 5.00th=[ 4817], 10.00th=[ 4883], 20.00th=[ 5014], 00:19:44.944 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5145], 60.00th=[ 5276], 00:19:44.944 | 70.00th=[ 5866], 80.00th=[ 6325], 90.00th=[ 6587], 95.00th=[ 7111], 00:19:44.944 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[ 8356], 99.95th=[ 8356], 00:19:44.944 | 99.99th=[ 8356] 00:19:44.944 bw ( KiB/s): min=40960, max=40960, per=41.83%, avg=40960.00, stdev= 0.00, samples=1 00:19:44.944 iops : min=10240, max=10240, avg=10240.00, stdev= 0.00, samples=1 00:19:44.944 lat (usec) : 1000=0.01% 00:19:44.944 lat (msec) : 2=0.05%, 4=0.24%, 10=99.71% 00:19:44.944 cpu : usr=3.80%, sys=7.50%, ctx=1482, majf=0, minf=2 00:19:44.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:44.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.944 issued rwts: total=11147,11264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.944 job2: (groupid=0, jobs=1): err= 0: pid=648187: Sat Nov 2 23:18:50 2024 00:19:44.944 read: IOPS=3872, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1004msec) 00:19:44.944 slat (usec): min=2, 
max=2800, avg=128.13, stdev=345.56 00:19:44.944 clat (usec): min=2873, max=21236, avg=16308.90, stdev=2757.24 00:19:44.944 lat (usec): min=3584, max=21239, avg=16437.03, stdev=2754.46 00:19:44.944 clat percentiles (usec): 00:19:44.944 | 1.00th=[ 7504], 5.00th=[ 9634], 10.00th=[13566], 20.00th=[14484], 00:19:44.944 | 30.00th=[14877], 40.00th=[15533], 50.00th=[17695], 60.00th=[17957], 00:19:44.944 | 70.00th=[18220], 80.00th=[18482], 90.00th=[18744], 95.00th=[18744], 00:19:44.944 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21103], 99.95th=[21103], 00:19:44.944 | 99.99th=[21365] 00:19:44.944 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:44.944 slat (usec): min=2, max=2407, avg=119.32, stdev=326.49 00:19:44.944 clat (usec): min=6739, max=19447, avg=15520.57, stdev=2956.42 00:19:44.944 lat (usec): min=6749, max=19492, avg=15639.89, stdev=2962.09 00:19:44.944 clat percentiles (usec): 00:19:44.944 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[14091], 00:19:44.944 | 30.00th=[14222], 40.00th=[14615], 50.00th=[16909], 60.00th=[17433], 00:19:44.944 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:19:44.944 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:19:44.944 | 99.99th=[19530] 00:19:44.944 bw ( KiB/s): min=14464, max=18304, per=16.73%, avg=16384.00, stdev=2715.29, samples=2 00:19:44.944 iops : min= 3616, max= 4576, avg=4096.00, stdev=678.82, samples=2 00:19:44.944 lat (msec) : 4=0.03%, 10=9.48%, 20=90.34%, 50=0.15% 00:19:44.944 cpu : usr=1.60%, sys=3.49%, ctx=2401, majf=0, minf=1 00:19:44.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:44.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.944 issued rwts: total=3888,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.944 job3: (groupid=0, jobs=1): err= 0: pid=648195: Sat Nov 2 23:18:50 2024 00:19:44.944 read: IOPS=5006, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1004msec) 00:19:44.944 slat (usec): min=2, max=2636, avg=102.31, stdev=343.04 00:19:44.944 clat (usec): min=2862, max=20421, avg=13016.74, stdev=5134.95 00:19:44.944 lat (usec): min=5076, max=20424, avg=13119.05, stdev=5167.17 00:19:44.944 clat percentiles (usec): 00:19:44.944 | 1.00th=[ 7046], 5.00th=[ 7570], 10.00th=[ 7701], 20.00th=[ 7963], 00:19:44.944 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 9503], 60.00th=[17957], 00:19:44.944 | 70.00th=[18220], 80.00th=[18482], 90.00th=[18744], 95.00th=[18744], 00:19:44.944 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20317], 99.95th=[20317], 00:19:44.944 | 99.99th=[20317] 00:19:44.944 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:19:44.944 slat (usec): min=2, max=2838, avg=91.44, stdev=312.95 00:19:44.944 clat (usec): min=5866, max=18702, avg=12012.43, stdev=4772.74 00:19:44.944 lat (usec): min=5878, max=18709, avg=12103.88, stdev=4803.73 00:19:44.944 clat percentiles (usec): 00:19:44.944 | 1.00th=[ 6915], 5.00th=[ 7242], 10.00th=[ 7373], 20.00th=[ 7570], 00:19:44.944 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[16450], 00:19:44.944 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18220], 00:19:44.944 | 99.00th=[18744], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:19:44.944 | 99.99th=[18744] 00:19:44.944 bw ( KiB/s): min=13544, max=27416, per=20.92%, avg=20480.00, 
stdev=9808.99, samples=2 00:19:44.944 iops : min= 3386, max= 6854, avg=5120.00, stdev=2452.25, samples=2 00:19:44.944 lat (msec) : 4=0.01%, 10=53.65%, 20=46.27%, 50=0.07% 00:19:44.944 cpu : usr=2.09%, sys=4.49%, ctx=2413, majf=0, minf=1 00:19:44.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:44.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.944 issued rwts: total=5027,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.944 00:19:44.944 Run status group 0 (all jobs): 00:19:44.944 READ: bw=93.8MiB/s (98.3MB/s), 15.1MiB/s-43.5MiB/s (15.9MB/s-45.6MB/s), io=94.2MiB (98.7MB), run=1001-1004msec 00:19:44.944 WRITE: bw=95.6MiB/s (100MB/s), 15.9MiB/s-44.0MiB/s (16.7MB/s-46.1MB/s), io=96.0MiB (101MB), run=1001-1004msec 00:19:44.944 00:19:44.944 Disk stats (read/write): 00:19:44.944 nvme0n1: ios=3357/3584, merge=0/0, ticks=12970/13034, in_queue=26004, util=83.57% 00:19:44.944 nvme0n2: ios=8992/9216, merge=0/0, ticks=17191/16681, in_queue=33872, util=84.68% 00:19:44.944 nvme0n3: ios=3165/3584, merge=0/0, ticks=12995/13802, in_queue=26797, util=88.10% 00:19:44.944 nvme0n4: ios=4299/4608, merge=0/0, ticks=13100/12771, in_queue=25871, util=89.33% 00:19:44.944 23:18:50 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:44.944 [global] 00:19:44.944 thread=1 00:19:44.944 invalidate=1 00:19:44.944 rw=randwrite 00:19:44.944 time_based=1 00:19:44.944 runtime=1 00:19:44.944 ioengine=libaio 00:19:44.944 direct=1 00:19:44.944 bs=4096 00:19:44.944 iodepth=128 00:19:44.944 norandommap=0 00:19:44.944 numjobs=1 00:19:44.944 00:19:44.944 verify_dump=1 00:19:44.944 verify_backlog=512 00:19:44.944 verify_state_save=0 00:19:44.944 do_verify=1 00:19:44.944 verify=crc32c-intel 00:19:44.944 [job0] 00:19:44.944 filename=/dev/nvme0n1 00:19:44.944 [job1] 00:19:44.944 filename=/dev/nvme0n2 00:19:44.944 [job2] 00:19:44.944 filename=/dev/nvme0n3 00:19:44.944 [job3] 00:19:44.944 filename=/dev/nvme0n4 00:19:44.944 Could not set queue depth (nvme0n1) 00:19:44.944 Could not set queue depth (nvme0n2) 00:19:44.944 Could not set queue depth (nvme0n3) 00:19:44.944 Could not set queue depth (nvme0n4) 00:19:45.202 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:45.202 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:45.202 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:45.202 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:45.202 fio-3.35 00:19:45.202 Starting 4 threads 00:19:46.590 00:19:46.591 job0: (groupid=0, jobs=1): err= 0: pid=648586: Sat Nov 2 23:18:51 2024 00:19:46.591 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:19:46.591 slat (usec): min=2, max=3144, avg=132.07, stdev=387.77 00:19:46.591 clat (usec): min=13495, max=20648, avg=17063.54, stdev=671.82 00:19:46.591 lat (usec): min=15187, max=20658, avg=17195.61, stdev=663.60 00:19:46.591 clat percentiles (usec): 00:19:46.591 | 1.00th=[15401], 5.00th=[16057], 10.00th=[16319], 20.00th=[16581], 00:19:46.591 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17171], 60.00th=[17171], 00:19:46.591 
| 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18220], 00:19:46.591 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20055], 99.95th=[20317], 00:19:46.591 | 99.99th=[20579] 00:19:46.591 write: IOPS=3968, BW=15.5MiB/s (16.3MB/s)(15.5MiB/1003msec); 0 zone resets 00:19:46.591 slat (usec): min=2, max=3146, avg=128.01, stdev=387.29 00:19:46.591 clat (usec): min=1889, max=19851, avg=16477.05, stdev=1762.03 00:19:46.591 lat (usec): min=2693, max=19861, avg=16605.06, stdev=1760.94 00:19:46.591 clat percentiles (usec): 00:19:46.591 | 1.00th=[ 6783], 5.00th=[14484], 10.00th=[15795], 20.00th=[16188], 00:19:46.591 | 30.00th=[16319], 40.00th=[16581], 50.00th=[16712], 60.00th=[16909], 00:19:46.591 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17957], 00:19:46.591 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19792], 99.95th=[19792], 00:19:46.591 | 99.99th=[19792] 00:19:46.591 bw ( KiB/s): min=14440, max=16384, per=14.30%, avg=15412.00, stdev=1374.62, samples=2 00:19:46.591 iops : min= 3610, max= 4096, avg=3853.00, stdev=343.65, samples=2 00:19:46.591 lat (msec) : 2=0.01%, 4=0.36%, 10=0.42%, 20=99.14%, 50=0.07% 00:19:46.591 cpu : usr=2.00%, sys=4.29%, ctx=896, majf=0, minf=1 00:19:46.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:46.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.591 issued rwts: total=3584,3980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.591 job1: (groupid=0, jobs=1): err= 0: pid=648598: Sat Nov 2 23:18:51 2024 00:19:46.591 read: IOPS=9611, BW=37.5MiB/s (39.4MB/s)(37.6MiB/1001msec) 00:19:46.591 slat (usec): min=2, max=1538, avg=50.68, stdev=178.49 00:19:46.591 clat (usec): min=385, max=8106, avg=6678.06, stdev=527.00 00:19:46.591 lat (usec): min=1089, max=8110, avg=6728.74, stdev=530.76 00:19:46.591 clat percentiles (usec): 00:19:46.591 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:19:46.591 | 30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6783], 00:19:46.591 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7373], 00:19:46.591 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[ 8029], 99.95th=[ 8029], 00:19:46.591 | 99.99th=[ 8094] 00:19:46.591 write: IOPS=9718, BW=38.0MiB/s (39.8MB/s)(38.0MiB/1001msec); 0 zone resets 00:19:46.591 slat (usec): min=2, max=1219, avg=48.51, stdev=166.23 00:19:46.591 clat (usec): min=4065, max=11388, avg=6407.87, stdev=590.19 00:19:46.591 lat (usec): min=4075, max=11398, avg=6456.38, stdev=594.76 00:19:46.591 clat percentiles (usec): 00:19:46.591 | 1.00th=[ 5538], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6063], 00:19:46.591 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6390], 00:19:46.591 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7046], 00:19:46.591 | 99.00th=[ 9503], 99.50th=[10421], 99.90th=[11338], 99.95th=[11338], 00:19:46.591 | 99.99th=[11338] 00:19:46.591 bw ( KiB/s): min=39616, max=39616, per=36.76%, avg=39616.00, stdev= 0.00, samples=1 00:19:46.591 iops : min= 9904, max= 9904, avg=9904.00, stdev= 0.00, samples=1 00:19:46.591 lat (usec) : 500=0.01% 00:19:46.591 lat (msec) : 2=0.13%, 4=0.18%, 10=99.27%, 20=0.42% 00:19:46.591 cpu : usr=5.20%, sys=8.00%, ctx=1359, majf=0, minf=1 00:19:46.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:46.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.591 issued rwts: total=9621,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.591 job2: (groupid=0, jobs=1): err= 0: pid=648620: Sat Nov 2 23:18:51 2024 00:19:46.591 read: IOPS=7799, BW=30.5MiB/s (31.9MB/s)(30.5MiB/1002msec) 00:19:46.591 slat (usec): min=2, max=1116, avg=61.73, stdev=219.50 00:19:46.591 clat (usec): min=497, max=9337, avg=8026.84, stdev=612.57 00:19:46.591 lat (usec): min=1361, max=9561, avg=8088.57, stdev=628.99 00:19:46.591 clat percentiles (usec): 00:19:46.591 | 1.00th=[ 5538], 5.00th=[ 7504], 10.00th=[ 7635], 20.00th=[ 7832], 00:19:46.591 | 30.00th=[ 7898], 40.00th=[ 7963], 50.00th=[ 8029], 60.00th=[ 8094], 00:19:46.591 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8848], 00:19:46.591 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[ 9241], 99.95th=[ 9372], 00:19:46.591 | 99.99th=[ 9372] 00:19:46.591 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec); 0 zone resets 00:19:46.591 slat (usec): min=2, max=1235, avg=59.10, stdev=206.85 00:19:46.591 clat (usec): min=6404, max=11749, avg=7798.28, stdev=597.12 00:19:46.591 lat (usec): min=6750, max=11753, avg=7857.38, stdev=618.55 00:19:46.591 clat percentiles (usec): 00:19:46.591 | 1.00th=[ 7046], 5.00th=[ 7242], 10.00th=[ 7308], 20.00th=[ 7439], 00:19:46.591 | 30.00th=[ 7570], 40.00th=[ 7635], 50.00th=[ 7701], 60.00th=[ 7767], 00:19:46.591 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8586], 00:19:46.591 | 99.00th=[11076], 99.50th=[11338], 99.90th=[11731], 99.95th=[11731], 00:19:46.591 | 99.99th=[11731] 00:19:46.591 bw ( KiB/s): min=32768, max=32768, per=30.41%, avg=32768.00, stdev= 0.00, samples=1 00:19:46.591 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:19:46.591 lat (usec) : 500=0.01% 00:19:46.591 lat (msec) : 2=0.11%, 4=0.19%, 10=98.66%, 20=1.04% 00:19:46.591 cpu : usr=3.50%, sys=7.89%, ctx=1229, majf=0, minf=1 00:19:46.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:46.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.591 issued rwts: total=7815,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.591 job3: (groupid=0, jobs=1): err= 0: pid=648628: Sat Nov 2 23:18:51 2024 00:19:46.591 read: IOPS=4839, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1003msec) 00:19:46.591 slat (usec): min=2, max=3048, avg=99.97, stdev=323.34 00:19:46.591 clat (usec): min=1933, max=16646, avg=12939.89, stdev=1012.15 00:19:46.591 lat (usec): min=3801, max=16648, avg=13039.86, stdev=1025.43 00:19:46.591 clat percentiles (usec): 00:19:46.591 | 1.00th=[ 8586], 5.00th=[12125], 10.00th=[12256], 20.00th=[12649], 00:19:46.591 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:19:46.591 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:19:46.591 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15401], 99.95th=[15795], 00:19:46.591 | 99.99th=[16712] 00:19:46.591 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:19:46.591 slat (usec): min=2, max=2965, avg=96.14, stdev=311.00 00:19:46.591 clat (usec): min=7745, max=15445, avg=12514.74, stdev=761.50 00:19:46.591 lat (usec): min=7755, max=15457, avg=12610.88, 
stdev=784.26 00:19:46.591 clat percentiles (usec): 00:19:46.591 | 1.00th=[ 9372], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:19:46.591 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:19:46.591 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13435], 00:19:46.591 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15008], 99.95th=[15008], 00:19:46.591 | 99.99th=[15401] 00:19:46.591 bw ( KiB/s): min=20480, max=20480, per=19.01%, avg=20480.00, stdev= 0.00, samples=2 00:19:46.591 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:46.591 lat (msec) : 2=0.01%, 4=0.06%, 10=1.51%, 20=98.42% 00:19:46.591 cpu : usr=2.69%, sys=5.19%, ctx=950, majf=0, minf=2 00:19:46.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:46.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.591 issued rwts: total=4854,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.591 00:19:46.591 Run status group 0 (all jobs): 00:19:46.591 READ: bw=101MiB/s (106MB/s), 14.0MiB/s-37.5MiB/s (14.6MB/s-39.4MB/s), io=101MiB (106MB), run=1001-1003msec 00:19:46.591 WRITE: bw=105MiB/s (110MB/s), 15.5MiB/s-38.0MiB/s (16.3MB/s-39.8MB/s), io=106MiB (111MB), run=1001-1003msec 00:19:46.591 00:19:46.591 Disk stats (read/write): 00:19:46.591 nvme0n1: ios=3121/3159, merge=0/0, ticks=13035/12992, in_queue=26027, util=83.85% 00:19:46.591 nvme0n2: ios=7858/8192, merge=0/0, ticks=12834/12825, in_queue=25659, util=84.97% 00:19:46.591 nvme0n3: ios=6575/6656, merge=0/0, ticks=13208/12486, in_queue=25694, util=88.31% 00:19:46.591 nvme0n4: ios=4096/4138, merge=0/0, ticks=17427/16654, in_queue=34081, util=89.45% 00:19:46.591 23:18:51 -- target/fio.sh@55 -- # sync 00:19:46.591 23:18:51 -- target/fio.sh@59 -- # fio_pid=648765 00:19:46.591 23:18:51 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:46.591 23:18:51 -- target/fio.sh@61 -- # sleep 3 00:19:46.591 [global] 00:19:46.591 thread=1 00:19:46.591 invalidate=1 00:19:46.591 rw=read 00:19:46.591 time_based=1 00:19:46.591 runtime=10 00:19:46.591 ioengine=libaio 00:19:46.591 direct=1 00:19:46.591 bs=4096 00:19:46.591 iodepth=1 00:19:46.591 norandommap=1 00:19:46.591 numjobs=1 00:19:46.591 00:19:46.591 [job0] 00:19:46.591 filename=/dev/nvme0n1 00:19:46.591 [job1] 00:19:46.591 filename=/dev/nvme0n2 00:19:46.591 [job2] 00:19:46.591 filename=/dev/nvme0n3 00:19:46.591 [job3] 00:19:46.591 filename=/dev/nvme0n4 00:19:46.591 Could not set queue depth (nvme0n1) 00:19:46.591 Could not set queue depth (nvme0n2) 00:19:46.591 Could not set queue depth (nvme0n3) 00:19:46.591 Could not set queue depth (nvme0n4) 00:19:46.849 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.849 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.849 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.849 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.849 fio-3.35 00:19:46.849 Starting 4 threads 00:19:49.373 23:18:54 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:49.630 fio: 
io_u error on file /dev/nvme0n4: Operation not supported: read offset=99622912, buflen=4096 00:19:49.630 fio: pid=649067, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:49.631 23:18:55 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:49.631 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=100757504, buflen=4096 00:19:49.631 fio: pid=649061, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:49.631 23:18:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:49.631 23:18:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:49.888 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=31531008, buflen=4096 00:19:49.888 fio: pid=649030, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:49.888 23:18:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:49.888 23:18:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:50.146 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=64147456, buflen=4096 00:19:50.146 fio: pid=649043, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:50.146 23:18:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:50.146 23:18:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:50.146 00:19:50.146 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649030: Sat Nov 2 23:18:55 2024 00:19:50.146 read: IOPS=7961, BW=31.1MiB/s (32.6MB/s)(94.1MiB/3025msec) 00:19:50.146 slat (usec): min=3, max=31972, avg=13.64, stdev=287.19 00:19:50.146 clat (usec): min=43, max=371, avg=109.44, stdev=22.98 00:19:50.146 lat (usec): min=50, max=32058, avg=123.08, stdev=288.08 00:19:50.146 clat percentiles (usec): 00:19:50.146 | 1.00th=[ 57], 5.00th=[ 72], 10.00th=[ 76], 20.00th=[ 82], 00:19:50.146 | 30.00th=[ 101], 40.00th=[ 112], 50.00th=[ 117], 60.00th=[ 121], 00:19:50.146 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 139], 00:19:50.146 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 188], 00:19:50.146 | 99.99th=[ 210] 00:19:50.146 bw ( KiB/s): min=29304, max=39280, per=24.41%, avg=31585.60, stdev=4320.82, samples=5 00:19:50.146 iops : min= 7326, max= 9820, avg=7896.40, stdev=1080.20, samples=5 00:19:50.146 lat (usec) : 50=0.07%, 100=29.34%, 250=70.58%, 500=0.01% 00:19:50.146 cpu : usr=3.70%, sys=12.33%, ctx=24089, majf=0, minf=1 00:19:50.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.147 issued rwts: total=24083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.147 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649043: Sat Nov 2 23:18:55 2024 00:19:50.147 read: IOPS=9866, BW=38.5MiB/s (40.4MB/s)(125MiB/3248msec) 00:19:50.147 slat (usec): min=5, max=16920, avg=11.82, 
stdev=198.73 00:19:50.147 clat (usec): min=39, max=21789, avg=87.30, stdev=168.14 00:19:50.147 lat (usec): min=56, max=21798, avg=99.12, stdev=260.30 00:19:50.147 clat percentiles (usec): 00:19:50.147 | 1.00th=[ 54], 5.00th=[ 59], 10.00th=[ 69], 20.00th=[ 74], 00:19:50.147 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:19:50.147 | 70.00th=[ 87], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 127], 00:19:50.147 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 169], 99.95th=[ 176], 00:19:50.147 | 99.99th=[ 379] 00:19:50.147 bw ( KiB/s): min=30288, max=44720, per=30.26%, avg=39142.67, stdev=6318.76, samples=6 00:19:50.147 iops : min= 7572, max=11180, avg=9785.67, stdev=1579.69, samples=6 00:19:50.147 lat (usec) : 50=0.03%, 100=78.31%, 250=21.64%, 500=0.01%, 1000=0.01% 00:19:50.147 lat (msec) : 50=0.01% 00:19:50.147 cpu : usr=3.88%, sys=13.55%, ctx=32052, majf=0, minf=2 00:19:50.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.147 issued rwts: total=32046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.147 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649061: Sat Nov 2 23:18:55 2024 00:19:50.147 read: IOPS=8683, BW=33.9MiB/s (35.6MB/s)(96.1MiB/2833msec) 00:19:50.147 slat (usec): min=3, max=14908, avg=11.46, stdev=121.05 00:19:50.147 clat (usec): min=58, max=311, avg=100.98, stdev=19.55 00:19:50.147 lat (usec): min=61, max=14994, avg=112.44, stdev=122.54 00:19:50.147 clat percentiles (usec): 00:19:50.147 | 1.00th=[ 75], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:19:50.147 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 99], 00:19:50.147 | 70.00th=[ 117], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 135], 00:19:50.147 | 99.00th=[ 145], 99.50th=[ 151], 99.90th=[ 169], 99.95th=[ 172], 00:19:50.147 | 99.99th=[ 186] 00:19:50.147 bw ( KiB/s): min=29304, max=39936, per=26.82%, avg=34699.20, stdev=5065.72, samples=5 00:19:50.147 iops : min= 7326, max= 9984, avg=8674.80, stdev=1266.43, samples=5 00:19:50.147 lat (usec) : 100=61.11%, 250=38.88%, 500=0.01% 00:19:50.147 cpu : usr=5.05%, sys=13.88%, ctx=24602, majf=0, minf=2 00:19:50.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.147 issued rwts: total=24600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.147 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649067: Sat Nov 2 23:18:55 2024 00:19:50.147 read: IOPS=9227, BW=36.0MiB/s (37.8MB/s)(95.0MiB/2636msec) 00:19:50.147 slat (nsec): min=8377, max=65819, avg=8955.32, stdev=917.18 00:19:50.147 clat (usec): min=32, max=302, avg=96.90, stdev=17.76 00:19:50.147 lat (usec): min=80, max=311, avg=105.85, stdev=17.81 00:19:50.147 clat percentiles (usec): 00:19:50.147 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:19:50.147 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 93], 00:19:50.147 | 70.00th=[ 98], 80.00th=[ 118], 90.00th=[ 127], 95.00th=[ 133], 00:19:50.147 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 
169], 99.95th=[ 174], 00:19:50.147 | 99.99th=[ 182] 00:19:50.147 bw ( KiB/s): min=34240, max=40864, per=28.83%, avg=37291.20, stdev=3272.30, samples=5 00:19:50.147 iops : min= 8560, max=10216, avg=9322.80, stdev=818.08, samples=5 00:19:50.147 lat (usec) : 50=0.01%, 100=72.22%, 250=27.77%, 500=0.01% 00:19:50.147 cpu : usr=4.71%, sys=12.71%, ctx=24324, majf=0, minf=2 00:19:50.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.147 issued rwts: total=24323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.147 00:19:50.147 Run status group 0 (all jobs): 00:19:50.147 READ: bw=126MiB/s (132MB/s), 31.1MiB/s-38.5MiB/s (32.6MB/s-40.4MB/s), io=410MiB (430MB), run=2636-3248msec 00:19:50.147 00:19:50.147 Disk stats (read/write): 00:19:50.147 nvme0n1: ios=22414/0, merge=0/0, ticks=2280/0, in_queue=2280, util=93.02% 00:19:50.147 nvme0n2: ios=30032/0, merge=0/0, ticks=2377/0, in_queue=2377, util=92.53% 00:19:50.147 nvme0n3: ios=22507/0, merge=0/0, ticks=2165/0, in_queue=2165, util=96.06% 00:19:50.147 nvme0n4: ios=24082/0, merge=0/0, ticks=2135/0, in_queue=2135, util=96.46% 00:19:50.404 23:18:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:50.404 23:18:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:50.662 23:18:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:50.662 23:18:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:50.919 23:18:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:50.919 23:18:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:50.919 23:18:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:50.919 23:18:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:51.177 23:18:56 -- target/fio.sh@69 -- # fio_status=0 00:19:51.177 23:18:56 -- target/fio.sh@70 -- # wait 648765 00:19:51.177 23:18:56 -- target/fio.sh@70 -- # fio_status=4 00:19:51.177 23:18:56 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:52.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:52.109 23:18:57 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:52.109 23:18:57 -- common/autotest_common.sh@1198 -- # local i=0 00:19:52.109 23:18:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:52.109 23:18:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.109 23:18:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:52.109 23:18:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.109 23:18:57 -- common/autotest_common.sh@1210 -- # return 0 00:19:52.109 23:18:57 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:52.109 23:18:57 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:52.109 nvmf hotplug test: fio failed as expected 00:19:52.109 23:18:57 -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.366 23:18:57 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:52.366 23:18:57 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:52.366 23:18:57 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:52.366 23:18:57 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:52.366 23:18:57 -- target/fio.sh@91 -- # nvmftestfini 00:19:52.366 23:18:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.366 23:18:57 -- nvmf/common.sh@116 -- # sync 00:19:52.366 23:18:57 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:52.366 23:18:57 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:52.366 23:18:57 -- nvmf/common.sh@119 -- # set +e 00:19:52.366 23:18:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:52.366 23:18:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:52.366 rmmod nvme_rdma 00:19:52.366 rmmod nvme_fabrics 00:19:52.366 23:18:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:52.366 23:18:58 -- nvmf/common.sh@123 -- # set -e 00:19:52.366 23:18:58 -- nvmf/common.sh@124 -- # return 0 00:19:52.366 23:18:58 -- nvmf/common.sh@477 -- # '[' -n 645892 ']' 00:19:52.366 23:18:58 -- nvmf/common.sh@478 -- # killprocess 645892 00:19:52.366 23:18:58 -- common/autotest_common.sh@926 -- # '[' -z 645892 ']' 00:19:52.366 23:18:58 -- common/autotest_common.sh@930 -- # kill -0 645892 00:19:52.366 23:18:58 -- common/autotest_common.sh@931 -- # uname 00:19:52.366 23:18:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.366 23:18:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 645892 00:19:52.624 23:18:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:52.624 23:18:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:52.624 23:18:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 645892' 00:19:52.624 killing process with pid 645892 00:19:52.624 23:18:58 -- common/autotest_common.sh@945 -- # kill 645892 00:19:52.624 23:18:58 -- common/autotest_common.sh@950 -- # wait 645892 00:19:52.882 23:18:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:52.882 23:18:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:52.882 00:19:52.882 real 0m26.317s 00:19:52.882 user 2m9.504s 00:19:52.882 sys 0m10.009s 00:19:52.882 23:18:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.882 23:18:58 -- common/autotest_common.sh@10 -- # set +x 00:19:52.882 ************************************ 00:19:52.882 END TEST nvmf_fio_target 00:19:52.882 ************************************ 00:19:52.882 23:18:58 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:52.882 23:18:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:52.882 23:18:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:52.882 23:18:58 -- common/autotest_common.sh@10 -- # set +x 00:19:52.882 ************************************ 00:19:52.882 START TEST nvmf_bdevio 00:19:52.882 ************************************ 00:19:52.882 23:18:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:52.882 * Looking for test storage... 
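For reference, the nvmf_fio_target run that finishes above reduces to roughly this RPC and nvme-cli sequence (a condensed sketch rather than the verbatim script: rpc.py stands for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path, and the NQN, serial, listener address 192.168.100.8 and port 4420 are simply the values this particular run used):

  # back the subsystem with 64 MB malloc bdevs (512-byte block size), plus RAID0 and concat volumes on top of some of them
  rpc.py bdev_malloc_create 64 512                                         # repeated for Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  # export everything through a single NVMe/RDMA subsystem
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # likewise Malloc1, raid0, concat0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # attach from the initiator, run the fio-wrapper jobs against /dev/nvme0n1..n4, then tear down
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The final fio pass (-t read -r 10) doubles as the hotplug check: the malloc and RAID bdevs are deleted while fio is still issuing reads, so the io_u "Operation not supported" errors and the non-zero fio status (fio_status=4) are the intended outcome, which is why the script prints "nvmf hotplug test: fio failed as expected" before tearing the target down.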
00:19:52.882 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:52.882 23:18:58 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.882 23:18:58 -- nvmf/common.sh@7 -- # uname -s 00:19:52.882 23:18:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.882 23:18:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.882 23:18:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.882 23:18:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.882 23:18:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.882 23:18:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.882 23:18:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.882 23:18:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.882 23:18:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.882 23:18:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.882 23:18:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:52.882 23:18:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:52.882 23:18:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.882 23:18:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.882 23:18:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.882 23:18:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:52.882 23:18:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.882 23:18:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.882 23:18:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.882 23:18:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.882 23:18:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.882 23:18:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.882 23:18:58 -- paths/export.sh@5 -- # export PATH 00:19:52.882 23:18:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.882 23:18:58 -- nvmf/common.sh@46 -- # : 0 00:19:52.882 23:18:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:52.882 23:18:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:52.882 23:18:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:52.882 23:18:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.882 23:18:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.882 23:18:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:52.882 23:18:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:52.882 23:18:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:52.882 23:18:58 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.882 23:18:58 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.882 23:18:58 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:52.882 23:18:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:52.882 23:18:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.882 23:18:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:52.882 23:18:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:52.882 23:18:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:52.882 23:18:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.882 23:18:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.882 23:18:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.882 23:18:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:52.882 23:18:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:52.882 23:18:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:52.882 23:18:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.437 23:19:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:59.437 23:19:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:59.438 23:19:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:59.438 23:19:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:59.438 23:19:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:59.438 23:19:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:59.438 23:19:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:59.438 23:19:05 -- nvmf/common.sh@294 -- # net_devs=() 00:19:59.438 23:19:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:59.438 23:19:05 -- nvmf/common.sh@295 
-- # e810=() 00:19:59.438 23:19:05 -- nvmf/common.sh@295 -- # local -ga e810 00:19:59.438 23:19:05 -- nvmf/common.sh@296 -- # x722=() 00:19:59.438 23:19:05 -- nvmf/common.sh@296 -- # local -ga x722 00:19:59.438 23:19:05 -- nvmf/common.sh@297 -- # mlx=() 00:19:59.438 23:19:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:59.438 23:19:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.438 23:19:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:59.438 23:19:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:59.438 23:19:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:59.438 23:19:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:59.438 23:19:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:59.438 23:19:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:59.438 23:19:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:59.438 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:59.438 23:19:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:59.438 23:19:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:59.438 23:19:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:59.438 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:59.438 23:19:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:59.438 23:19:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:59.438 23:19:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:59.438 23:19:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.438 23:19:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
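The block above is nvmf/common.sh picking test NICs by PCI vendor/device ID (vendor 0x15b3, Mellanox, device 0x1015 on this node) and then resolving the Linux net device that sysfs exposes under each PCI function. A minimal standalone sketch of the same lookup, assuming only lspci and sysfs are available; this is an illustration, not part of the harness:

  # List Mellanox PCI functions by vendor ID, then print the net device under each,
  # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step traced above.
  for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue
          echo "Found net device under $pci: $(basename "$netdir")"
      done
  done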
00:19:59.438 23:19:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.438 23:19:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:59.438 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:59.438 23:19:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.438 23:19:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:59.438 23:19:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.438 23:19:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:59.438 23:19:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.438 23:19:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:59.438 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:59.438 23:19:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.438 23:19:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:59.438 23:19:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:59.438 23:19:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:59.438 23:19:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:59.438 23:19:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:59.438 23:19:05 -- nvmf/common.sh@57 -- # uname 00:19:59.438 23:19:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:59.438 23:19:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:59.438 23:19:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:59.438 23:19:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:59.696 23:19:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:59.696 23:19:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:59.696 23:19:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:59.696 23:19:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:59.696 23:19:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:59.696 23:19:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:59.696 23:19:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:59.696 23:19:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:59.696 23:19:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:59.696 23:19:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:59.696 23:19:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:59.696 23:19:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:59.696 23:19:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:59.696 23:19:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.696 23:19:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:59.696 23:19:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:59.696 23:19:05 -- nvmf/common.sh@104 -- # continue 2 00:19:59.696 23:19:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:59.696 23:19:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.696 23:19:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:59.696 23:19:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.696 23:19:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:59.696 23:19:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:59.696 23:19:05 -- nvmf/common.sh@104 -- # continue 2 00:19:59.696 23:19:05 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:19:59.696 23:19:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:59.696 23:19:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:59.696 23:19:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:59.696 23:19:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:59.696 23:19:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:59.696 23:19:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:59.696 23:19:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:59.696 23:19:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:59.696 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:59.696 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:59.696 altname enp217s0f0np0 00:19:59.696 altname ens818f0np0 00:19:59.696 inet 192.168.100.8/24 scope global mlx_0_0 00:19:59.696 valid_lft forever preferred_lft forever 00:19:59.696 23:19:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:59.696 23:19:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:59.696 23:19:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:59.696 23:19:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:59.696 23:19:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:59.696 23:19:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:59.696 23:19:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:59.696 23:19:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:59.696 23:19:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:59.696 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:59.696 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:59.696 altname enp217s0f1np1 00:19:59.696 altname ens818f1np1 00:19:59.696 inet 192.168.100.9/24 scope global mlx_0_1 00:19:59.696 valid_lft forever preferred_lft forever 00:19:59.696 23:19:05 -- nvmf/common.sh@410 -- # return 0 00:19:59.696 23:19:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:59.696 23:19:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:59.696 23:19:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:59.696 23:19:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:59.696 23:19:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:59.696 23:19:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:59.696 23:19:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:59.696 23:19:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:59.696 23:19:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:59.696 23:19:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:59.696 23:19:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:59.696 23:19:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.697 23:19:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:59.697 23:19:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:59.697 23:19:05 -- nvmf/common.sh@104 -- # continue 2 00:19:59.697 23:19:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:59.697 23:19:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.697 23:19:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:59.697 23:19:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.697 23:19:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:59.697 23:19:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:59.697 23:19:05 -- 
nvmf/common.sh@104 -- # continue 2 00:19:59.697 23:19:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:59.697 23:19:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:59.697 23:19:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:59.697 23:19:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:59.697 23:19:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:59.697 23:19:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:59.697 23:19:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:59.697 23:19:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:59.697 23:19:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:59.697 23:19:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:59.697 23:19:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:59.697 23:19:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:59.697 23:19:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:59.697 192.168.100.9' 00:19:59.697 23:19:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:59.697 192.168.100.9' 00:19:59.697 23:19:05 -- nvmf/common.sh@445 -- # head -n 1 00:19:59.697 23:19:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:59.697 23:19:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:59.697 192.168.100.9' 00:19:59.697 23:19:05 -- nvmf/common.sh@446 -- # tail -n +2 00:19:59.697 23:19:05 -- nvmf/common.sh@446 -- # head -n 1 00:19:59.697 23:19:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:59.697 23:19:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:59.697 23:19:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:59.697 23:19:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:59.697 23:19:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:59.697 23:19:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:59.697 23:19:05 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:59.697 23:19:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:59.697 23:19:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:59.697 23:19:05 -- common/autotest_common.sh@10 -- # set +x 00:19:59.697 23:19:05 -- nvmf/common.sh@469 -- # nvmfpid=653443 00:19:59.697 23:19:05 -- nvmf/common.sh@470 -- # waitforlisten 653443 00:19:59.697 23:19:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:59.697 23:19:05 -- common/autotest_common.sh@819 -- # '[' -z 653443 ']' 00:19:59.697 23:19:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.697 23:19:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:59.697 23:19:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.697 23:19:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:59.697 23:19:05 -- common/autotest_common.sh@10 -- # set +x 00:19:59.955 [2024-11-02 23:19:05.452591] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:59.955 [2024-11-02 23:19:05.452640] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.955 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.955 [2024-11-02 23:19:05.521631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.955 [2024-11-02 23:19:05.593640] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:59.955 [2024-11-02 23:19:05.593749] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.955 [2024-11-02 23:19:05.593758] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.955 [2024-11-02 23:19:05.593767] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.955 [2024-11-02 23:19:05.593886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:59.955 [2024-11-02 23:19:05.594006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:59.955 [2024-11-02 23:19:05.594113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.955 [2024-11-02 23:19:05.594115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:00.520 23:19:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:00.520 23:19:06 -- common/autotest_common.sh@852 -- # return 0 00:20:00.520 23:19:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:00.520 23:19:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:00.520 23:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.777 23:19:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.777 23:19:06 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:00.777 23:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.777 23:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.777 [2024-11-02 23:19:06.346746] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfc3970/0xfc7e60) succeed. 00:20:00.777 [2024-11-02 23:19:06.355945] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfc4f60/0x1009500) succeed. 
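At this point the RDMA transport has been created inside nvmf_tgt (the two create_ib_device notices above); the lines that follow add a 64 MiB / 512-byte-block malloc bdev, an NVMe-oF subsystem, a namespace and an RDMA listener on 192.168.100.8:4420 before bdevio is started against it. As a sketch, the same setup issued directly through rpc.py, which is roughly what the rpc_cmd wrapper resolves to (paths as used in this workspace):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

bdevio itself does not go through the kernel initiator; it attaches via the SPDK bdev_nvme driver using the config generated by gen_nvmf_target_json and fed in through --json /dev/fd/62, which is printed a little further down.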
00:20:00.778 23:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.778 23:19:06 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:00.778 23:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.778 23:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.778 Malloc0 00:20:00.778 23:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.778 23:19:06 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.778 23:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.778 23:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.778 23:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.778 23:19:06 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:00.778 23:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.778 23:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.778 23:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.778 23:19:06 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:00.778 23:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.778 23:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:00.778 [2024-11-02 23:19:06.514154] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:00.778 23:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.778 23:19:06 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:00.778 23:19:06 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:00.778 23:19:06 -- nvmf/common.sh@520 -- # config=() 00:20:00.778 23:19:06 -- nvmf/common.sh@520 -- # local subsystem config 00:20:00.778 23:19:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:00.778 23:19:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:00.778 { 00:20:00.778 "params": { 00:20:00.778 "name": "Nvme$subsystem", 00:20:00.778 "trtype": "$TEST_TRANSPORT", 00:20:00.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.778 "adrfam": "ipv4", 00:20:00.778 "trsvcid": "$NVMF_PORT", 00:20:00.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.778 "hdgst": ${hdgst:-false}, 00:20:00.778 "ddgst": ${ddgst:-false} 00:20:00.778 }, 00:20:00.778 "method": "bdev_nvme_attach_controller" 00:20:00.778 } 00:20:00.778 EOF 00:20:00.778 )") 00:20:00.778 23:19:06 -- nvmf/common.sh@542 -- # cat 00:20:00.778 23:19:06 -- nvmf/common.sh@544 -- # jq . 00:20:00.778 23:19:06 -- nvmf/common.sh@545 -- # IFS=, 00:20:01.036 23:19:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:01.036 "params": { 00:20:01.036 "name": "Nvme1", 00:20:01.036 "trtype": "rdma", 00:20:01.036 "traddr": "192.168.100.8", 00:20:01.036 "adrfam": "ipv4", 00:20:01.036 "trsvcid": "4420", 00:20:01.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.036 "hdgst": false, 00:20:01.036 "ddgst": false 00:20:01.036 }, 00:20:01.036 "method": "bdev_nvme_attach_controller" 00:20:01.036 }' 00:20:01.036 [2024-11-02 23:19:06.563975] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:20:01.036 [2024-11-02 23:19:06.564027] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653516 ] 00:20:01.036 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.036 [2024-11-02 23:19:06.635618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:01.036 [2024-11-02 23:19:06.706595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.036 [2024-11-02 23:19:06.706691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.036 [2024-11-02 23:19:06.706693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.295 [2024-11-02 23:19:06.883419] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:01.295 [2024-11-02 23:19:06.883452] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:01.295 I/O targets: 00:20:01.295 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:01.295 00:20:01.295 00:20:01.295 CUnit - A unit testing framework for C - Version 2.1-3 00:20:01.295 http://cunit.sourceforge.net/ 00:20:01.295 00:20:01.295 00:20:01.295 Suite: bdevio tests on: Nvme1n1 00:20:01.295 Test: blockdev write read block ...passed 00:20:01.295 Test: blockdev write zeroes read block ...passed 00:20:01.295 Test: blockdev write zeroes read no split ...passed 00:20:01.295 Test: blockdev write zeroes read split ...passed 00:20:01.295 Test: blockdev write zeroes read split partial ...passed 00:20:01.295 Test: blockdev reset ...[2024-11-02 23:19:06.913135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:01.295 [2024-11-02 23:19:06.936100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:01.295 [2024-11-02 23:19:06.962918] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:01.295 passed 00:20:01.295 Test: blockdev write read 8 blocks ...passed 00:20:01.295 Test: blockdev write read size > 128k ...passed 00:20:01.295 Test: blockdev write read invalid size ...passed 00:20:01.295 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:01.295 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:01.295 Test: blockdev write read max offset ...passed 00:20:01.295 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:01.295 Test: blockdev writev readv 8 blocks ...passed 00:20:01.295 Test: blockdev writev readv 30 x 1block ...passed 00:20:01.295 Test: blockdev writev readv block ...passed 00:20:01.295 Test: blockdev writev readv size > 128k ...passed 00:20:01.295 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:01.295 Test: blockdev comparev and writev ...[2024-11-02 23:19:06.965824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.295 [2024-11-02 23:19:06.965852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.965864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.295 [2024-11-02 23:19:06.965874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.295 [2024-11-02 23:19:06.966050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.295 [2024-11-02 23:19:06.966070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.295 [2024-11-02 23:19:06.966237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.295 [2024-11-02 23:19:06.966256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.295 [2024-11-02 23:19:06.966430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:01.295 [2024-11-02 23:19:06.966449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:01.295 passed 00:20:01.295 Test: blockdev nvme passthru rw ...passed 00:20:01.295 Test: blockdev nvme passthru vendor specific ...[2024-11-02 23:19:06.966695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:01.295 [2024-11-02 23:19:06.966706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:01.295 [2024-11-02 23:19:06.966761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:01.295 [2024-11-02 23:19:06.966810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:01.295 [2024-11-02 23:19:06.966850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:01.295 [2024-11-02 23:19:06.966863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:01.295 passed 00:20:01.295 Test: blockdev nvme admin passthru ...passed 00:20:01.295 Test: blockdev copy ...passed 00:20:01.295 00:20:01.295 Run Summary: Type Total Ran Passed Failed Inactive 00:20:01.295 suites 1 1 n/a 0 0 00:20:01.295 tests 23 23 23 0 0 00:20:01.295 asserts 152 152 152 0 n/a 00:20:01.295 00:20:01.295 Elapsed time = 0.170 seconds 00:20:01.554 23:19:07 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.554 23:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.554 23:19:07 -- common/autotest_common.sh@10 -- # set +x 00:20:01.554 23:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.554 23:19:07 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:01.554 23:19:07 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:01.554 23:19:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:01.554 23:19:07 -- nvmf/common.sh@116 -- # sync 00:20:01.554 23:19:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:01.554 23:19:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:01.554 23:19:07 -- nvmf/common.sh@119 -- # set +e 00:20:01.554 23:19:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:01.554 23:19:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:01.554 rmmod nvme_rdma 00:20:01.554 rmmod nvme_fabrics 00:20:01.554 23:19:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:01.554 23:19:07 -- nvmf/common.sh@123 -- # set -e 00:20:01.554 23:19:07 -- nvmf/common.sh@124 -- # return 0 00:20:01.554 23:19:07 -- nvmf/common.sh@477 -- # '[' -n 653443 ']' 00:20:01.554 23:19:07 -- nvmf/common.sh@478 -- # killprocess 653443 00:20:01.554 23:19:07 -- common/autotest_common.sh@926 -- # '[' -z 653443 ']' 00:20:01.554 23:19:07 -- common/autotest_common.sh@930 -- # kill -0 653443 00:20:01.554 23:19:07 -- common/autotest_common.sh@931 -- # uname 00:20:01.554 23:19:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:01.554 23:19:07 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 653443 00:20:01.554 23:19:07 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:01.554 23:19:07 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:01.554 23:19:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 653443' 00:20:01.554 killing process with pid 653443 00:20:01.554 23:19:07 -- common/autotest_common.sh@945 -- # kill 653443 00:20:01.554 23:19:07 -- common/autotest_common.sh@950 -- # wait 653443 00:20:02.121 23:19:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:02.121 23:19:07 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:02.121 00:20:02.121 real 0m9.136s 00:20:02.121 user 0m10.935s 00:20:02.121 sys 0m5.778s 00:20:02.121 23:19:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.121 23:19:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.121 ************************************ 00:20:02.121 END TEST nvmf_bdevio 00:20:02.121 ************************************ 00:20:02.121 23:19:07 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:02.121 23:19:07 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:02.121 23:19:07 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:02.121 23:19:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:02.121 23:19:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.121 23:19:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.121 ************************************ 00:20:02.121 START TEST nvmf_fuzz 00:20:02.121 ************************************ 00:20:02.121 23:19:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:02.121 * Looking for test storage... 
00:20:02.121 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:02.121 23:19:07 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.121 23:19:07 -- nvmf/common.sh@7 -- # uname -s 00:20:02.121 23:19:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.121 23:19:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.121 23:19:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.121 23:19:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.121 23:19:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.121 23:19:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.121 23:19:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.121 23:19:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.121 23:19:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.121 23:19:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.121 23:19:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:02.121 23:19:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:02.121 23:19:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.121 23:19:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.121 23:19:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.121 23:19:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:02.121 23:19:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.121 23:19:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.121 23:19:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.121 23:19:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.121 23:19:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.121 23:19:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.121 23:19:07 -- paths/export.sh@5 -- # export PATH 00:20:02.121 23:19:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.121 23:19:07 -- nvmf/common.sh@46 -- # : 0 00:20:02.121 23:19:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:02.121 23:19:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:02.121 23:19:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:02.121 23:19:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.121 23:19:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.121 23:19:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:02.121 23:19:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:02.121 23:19:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:02.121 23:19:07 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:02.121 23:19:07 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:02.121 23:19:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.121 23:19:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:02.121 23:19:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:02.121 23:19:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:02.121 23:19:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.121 23:19:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.121 23:19:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.121 23:19:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:02.121 23:19:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:02.121 23:19:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:02.121 23:19:07 -- common/autotest_common.sh@10 -- # set +x 00:20:08.682 23:19:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:08.682 23:19:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:08.682 23:19:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:08.682 23:19:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:08.682 23:19:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:08.682 23:19:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:08.682 23:19:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:08.682 23:19:14 -- nvmf/common.sh@294 -- # net_devs=() 00:20:08.682 23:19:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:08.682 23:19:14 -- nvmf/common.sh@295 -- # e810=() 00:20:08.682 23:19:14 -- nvmf/common.sh@295 -- # local -ga e810 00:20:08.683 23:19:14 -- nvmf/common.sh@296 -- # x722=() 
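The hostnqn/hostid pair set above comes from nvme gen-hostnqn and is what common.sh would hand to the kernel initiator through the NVME_HOST array; neither bdevio nor the fuzzer actually calls nvme connect (both drive the target from user-space initiators), so the following is only an illustrative sketch of how those values are meant to be consumed, with the target address taken from the listener used throughout this run:

  # Illustrative only: derive the host identity the same way the values above line up,
  # then connect with the kernel initiator using the NVME_CONNECT/NVME_HOST options.
  HOSTNQN=$(nvme gen-hostnqn)                           # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}    # bare UUID, as in NVME_HOSTID above
  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" --hostid="$HOSTID"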
00:20:08.683 23:19:14 -- nvmf/common.sh@296 -- # local -ga x722 00:20:08.683 23:19:14 -- nvmf/common.sh@297 -- # mlx=() 00:20:08.683 23:19:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:08.683 23:19:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.683 23:19:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:08.683 23:19:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:08.683 23:19:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:08.683 23:19:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:08.683 23:19:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:08.683 23:19:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:08.683 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:08.683 23:19:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:08.683 23:19:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:08.683 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:08.683 23:19:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:08.683 23:19:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:08.683 23:19:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.683 23:19:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:08.683 23:19:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.683 23:19:14 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:08.683 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:08.683 23:19:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.683 23:19:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.683 23:19:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:08.683 23:19:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.683 23:19:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:08.683 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:08.683 23:19:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.683 23:19:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:08.683 23:19:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:08.683 23:19:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:08.683 23:19:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:08.683 23:19:14 -- nvmf/common.sh@57 -- # uname 00:20:08.683 23:19:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:08.683 23:19:14 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:08.683 23:19:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:08.683 23:19:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:08.683 23:19:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:08.683 23:19:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:08.683 23:19:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:08.683 23:19:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:08.683 23:19:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:08.683 23:19:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:08.683 23:19:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:08.683 23:19:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:08.683 23:19:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:08.683 23:19:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:08.683 23:19:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:08.683 23:19:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:08.683 23:19:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:08.683 23:19:14 -- nvmf/common.sh@104 -- # continue 2 00:20:08.683 23:19:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.683 23:19:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:08.683 23:19:14 -- nvmf/common.sh@104 -- # continue 2 00:20:08.683 23:19:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:08.683 23:19:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:08.683 23:19:14 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:20:08.683 23:19:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:08.683 23:19:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:08.683 23:19:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:08.683 23:19:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:08.683 23:19:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:08.683 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:08.683 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:08.683 altname enp217s0f0np0 00:20:08.683 altname ens818f0np0 00:20:08.683 inet 192.168.100.8/24 scope global mlx_0_0 00:20:08.683 valid_lft forever preferred_lft forever 00:20:08.683 23:19:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:08.683 23:19:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:08.683 23:19:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:08.683 23:19:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:08.683 23:19:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:08.683 23:19:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:08.683 23:19:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:08.683 23:19:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:08.683 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:08.683 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:08.683 altname enp217s0f1np1 00:20:08.683 altname ens818f1np1 00:20:08.683 inet 192.168.100.9/24 scope global mlx_0_1 00:20:08.683 valid_lft forever preferred_lft forever 00:20:08.683 23:19:14 -- nvmf/common.sh@410 -- # return 0 00:20:08.683 23:19:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:08.683 23:19:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:08.683 23:19:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:08.683 23:19:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:08.683 23:19:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:08.683 23:19:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:08.683 23:19:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:08.942 23:19:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:08.942 23:19:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:08.942 23:19:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:08.942 23:19:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:08.942 23:19:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.942 23:19:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:08.942 23:19:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:08.942 23:19:14 -- nvmf/common.sh@104 -- # continue 2 00:20:08.942 23:19:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:08.942 23:19:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.942 23:19:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:08.942 23:19:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.942 23:19:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:08.942 23:19:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:08.942 23:19:14 -- nvmf/common.sh@104 -- # continue 2 00:20:08.942 23:19:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:08.942 23:19:14 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:08.942 23:19:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:08.942 23:19:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:08.942 23:19:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:08.942 23:19:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:08.942 23:19:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:08.942 23:19:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:08.942 23:19:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:08.942 23:19:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:08.942 23:19:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:08.942 23:19:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:08.942 23:19:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:08.942 192.168.100.9' 00:20:08.942 23:19:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:08.942 192.168.100.9' 00:20:08.942 23:19:14 -- nvmf/common.sh@445 -- # head -n 1 00:20:08.942 23:19:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:08.942 23:19:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:08.942 192.168.100.9' 00:20:08.942 23:19:14 -- nvmf/common.sh@446 -- # tail -n +2 00:20:08.942 23:19:14 -- nvmf/common.sh@446 -- # head -n 1 00:20:08.942 23:19:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:08.943 23:19:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:08.943 23:19:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:08.943 23:19:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:08.943 23:19:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:08.943 23:19:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:08.943 23:19:14 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:08.943 23:19:14 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=656979 00:20:08.943 23:19:14 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:08.943 23:19:14 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 656979 00:20:08.943 23:19:14 -- common/autotest_common.sh@819 -- # '[' -z 656979 ']' 00:20:08.943 23:19:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.943 23:19:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:08.943 23:19:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
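The target addresses used for the fuzz run (192.168.100.8 and 192.168.100.9) are re-derived here the same way as in the earlier tests: for each RDMA-capable interface the script prints its first IPv4 address and strips the prefix length. A compact form of the get_ip_address helper seen in the trace, using only the commands visible above:

  # Print the first IPv4 address of an interface, without the /prefix suffix.
  get_ip_address() {
      local iface=$1
      ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
  get_ip_address mlx_0_1   # -> 192.168.100.9

With the addresses known, nvmf_tgt is started on core mask 0x1 and the script waits for its RPC socket at /var/tmp/spdk.sock before configuring the subsystem for fuzzing.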
00:20:08.943 23:19:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:08.943 23:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:09.877 23:19:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:09.877 23:19:15 -- common/autotest_common.sh@852 -- # return 0 00:20:09.877 23:19:15 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:09.877 23:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:09.877 23:19:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.877 23:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.877 23:19:15 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:09.877 23:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:09.878 23:19:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.878 Malloc0 00:20:09.878 23:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.878 23:19:15 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.878 23:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:09.878 23:19:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.878 23:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.878 23:19:15 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:09.878 23:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:09.878 23:19:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.878 23:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.878 23:19:15 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:09.878 23:19:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:09.878 23:19:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.878 23:19:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.878 23:19:15 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:09.878 23:19:15 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:20:41.947 Fuzzing completed. Shutting down the fuzz application 00:20:41.947 00:20:41.947 Dumping successful admin opcodes: 00:20:41.947 8, 9, 10, 24, 00:20:41.947 Dumping successful io opcodes: 00:20:41.947 0, 9, 00:20:41.947 NS: 0x200003af1f00 I/O qp, Total commands completed: 1095148, total successful commands: 6434, random_seed: 402154816 00:20:41.947 NS: 0x200003af1f00 admin qp, Total commands completed: 138320, total successful commands: 1120, random_seed: 3500324672 00:20:41.948 23:19:45 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:41.948 Fuzzing completed. 
Shutting down the fuzz application 00:20:41.948 00:20:41.948 Dumping successful admin opcodes: 00:20:41.948 24, 00:20:41.948 Dumping successful io opcodes: 00:20:41.948 00:20:41.948 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3629864628 00:20:41.948 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3629942578 00:20:41.948 23:19:47 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.948 23:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:41.948 23:19:47 -- common/autotest_common.sh@10 -- # set +x 00:20:41.948 23:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:41.948 23:19:47 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:41.948 23:19:47 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:41.948 23:19:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:41.948 23:19:47 -- nvmf/common.sh@116 -- # sync 00:20:41.948 23:19:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:41.948 23:19:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:41.948 23:19:47 -- nvmf/common.sh@119 -- # set +e 00:20:41.948 23:19:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:41.948 23:19:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:41.948 rmmod nvme_rdma 00:20:41.948 rmmod nvme_fabrics 00:20:41.948 23:19:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:41.948 23:19:47 -- nvmf/common.sh@123 -- # set -e 00:20:41.948 23:19:47 -- nvmf/common.sh@124 -- # return 0 00:20:41.948 23:19:47 -- nvmf/common.sh@477 -- # '[' -n 656979 ']' 00:20:41.948 23:19:47 -- nvmf/common.sh@478 -- # killprocess 656979 00:20:41.948 23:19:47 -- common/autotest_common.sh@926 -- # '[' -z 656979 ']' 00:20:41.948 23:19:47 -- common/autotest_common.sh@930 -- # kill -0 656979 00:20:41.948 23:19:47 -- common/autotest_common.sh@931 -- # uname 00:20:41.948 23:19:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:41.948 23:19:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 656979 00:20:41.948 23:19:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:41.948 23:19:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:41.948 23:19:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 656979' 00:20:41.948 killing process with pid 656979 00:20:41.948 23:19:47 -- common/autotest_common.sh@945 -- # kill 656979 00:20:41.948 23:19:47 -- common/autotest_common.sh@950 -- # wait 656979 00:20:41.948 23:19:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:41.948 23:19:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:41.948 23:19:47 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:42.207 00:20:42.207 real 0m40.066s 00:20:42.207 user 0m51.722s 00:20:42.207 sys 0m20.176s 00:20:42.207 23:19:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.207 23:19:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.207 ************************************ 00:20:42.207 END TEST nvmf_fuzz 00:20:42.207 ************************************ 00:20:42.207 23:19:47 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:42.207 23:19:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
00:20:42.207 23:19:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:42.207 23:19:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.207 ************************************ 00:20:42.207 START TEST nvmf_multiconnection 00:20:42.207 ************************************ 00:20:42.207 23:19:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:42.207 * Looking for test storage... 00:20:42.207 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:42.207 23:19:47 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.207 23:19:47 -- nvmf/common.sh@7 -- # uname -s 00:20:42.207 23:19:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.207 23:19:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.207 23:19:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.207 23:19:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.207 23:19:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.207 23:19:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.207 23:19:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.207 23:19:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.207 23:19:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.207 23:19:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.207 23:19:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.207 23:19:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:42.207 23:19:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.207 23:19:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.207 23:19:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.207 23:19:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:42.207 23:19:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.207 23:19:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.207 23:19:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.207 23:19:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.207 23:19:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.207 23:19:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.207 23:19:47 -- paths/export.sh@5 -- # export PATH 00:20:42.207 23:19:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.207 23:19:47 -- nvmf/common.sh@46 -- # : 0 00:20:42.207 23:19:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:42.207 23:19:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:42.207 23:19:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:42.207 23:19:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.207 23:19:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.207 23:19:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:42.207 23:19:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:42.207 23:19:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:42.207 23:19:47 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.207 23:19:47 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.207 23:19:47 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:42.207 23:19:47 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:42.207 23:19:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:42.207 23:19:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.207 23:19:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:42.207 23:19:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:42.207 23:19:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:42.207 23:19:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.207 23:19:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.207 23:19:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.207 23:19:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:42.207 23:19:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:42.207 23:19:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:42.207 23:19:47 -- common/autotest_common.sh@10 -- # set +x 00:20:48.811 23:19:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:48.811 23:19:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:48.811 23:19:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:48.811 23:19:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:48.811 23:19:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:48.811 23:19:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:48.811 23:19:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:48.811 23:19:54 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:48.811 23:19:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:48.811 23:19:54 -- nvmf/common.sh@295 -- # e810=() 00:20:48.811 23:19:54 -- nvmf/common.sh@295 -- # local -ga e810 00:20:48.811 23:19:54 -- nvmf/common.sh@296 -- # x722=() 00:20:48.811 23:19:54 -- nvmf/common.sh@296 -- # local -ga x722 00:20:48.811 23:19:54 -- nvmf/common.sh@297 -- # mlx=() 00:20:48.811 23:19:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:48.811 23:19:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.811 23:19:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:48.811 23:19:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:48.811 23:19:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:48.811 23:19:54 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:48.811 23:19:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:48.811 23:19:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:48.811 23:19:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:48.811 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:48.811 23:19:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:48.811 23:19:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:48.811 23:19:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:48.811 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:48.811 23:19:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:48.811 23:19:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:48.811 23:19:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:48.811 23:19:54 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.811 23:19:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:48.811 23:19:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.811 23:19:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:48.811 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:48.811 23:19:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.811 23:19:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:48.811 23:19:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.811 23:19:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:48.811 23:19:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.811 23:19:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:48.811 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:48.811 23:19:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.811 23:19:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:48.811 23:19:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:48.811 23:19:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:48.811 23:19:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:48.811 23:19:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:48.811 23:19:54 -- nvmf/common.sh@57 -- # uname 00:20:48.812 23:19:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:48.812 23:19:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:48.812 23:19:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:48.812 23:19:54 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:48.812 23:19:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:48.812 23:19:54 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:48.812 23:19:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:48.812 23:19:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:48.812 23:19:54 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:48.812 23:19:54 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:48.812 23:19:54 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:48.812 23:19:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:48.812 23:19:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:48.812 23:19:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:48.812 23:19:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:48.812 23:19:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:48.812 23:19:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:48.812 23:19:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.812 23:19:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:48.812 23:19:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:48.812 23:19:54 -- nvmf/common.sh@104 -- # continue 2 00:20:48.812 23:19:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:48.812 23:19:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.812 23:19:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:48.812 23:19:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.812 23:19:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:48.812 23:19:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:48.812 23:19:54 -- 
nvmf/common.sh@104 -- # continue 2 00:20:48.812 23:19:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:48.812 23:19:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:48.812 23:19:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:48.812 23:19:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:48.812 23:19:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:48.812 23:19:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:48.812 23:19:54 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:48.812 23:19:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:48.812 23:19:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:48.812 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:48.812 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:48.812 altname enp217s0f0np0 00:20:48.812 altname ens818f0np0 00:20:48.812 inet 192.168.100.8/24 scope global mlx_0_0 00:20:48.812 valid_lft forever preferred_lft forever 00:20:48.812 23:19:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:48.812 23:19:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:48.812 23:19:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:48.812 23:19:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:48.812 23:19:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:48.812 23:19:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:49.070 23:19:54 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:49.070 23:19:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:49.070 23:19:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:49.070 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:49.070 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:49.070 altname enp217s0f1np1 00:20:49.070 altname ens818f1np1 00:20:49.070 inet 192.168.100.9/24 scope global mlx_0_1 00:20:49.070 valid_lft forever preferred_lft forever 00:20:49.070 23:19:54 -- nvmf/common.sh@410 -- # return 0 00:20:49.070 23:19:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:49.070 23:19:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:49.070 23:19:54 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:49.070 23:19:54 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:49.070 23:19:54 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:49.070 23:19:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:49.070 23:19:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:49.070 23:19:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:49.070 23:19:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:49.070 23:19:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:49.070 23:19:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:49.070 23:19:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.070 23:19:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:49.070 23:19:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:49.070 23:19:54 -- nvmf/common.sh@104 -- # continue 2 00:20:49.070 23:19:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:49.070 23:19:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.070 23:19:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:49.070 23:19:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.070 23:19:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:20:49.070 23:19:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:49.070 23:19:54 -- nvmf/common.sh@104 -- # continue 2 00:20:49.070 23:19:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:49.070 23:19:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:49.070 23:19:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:49.070 23:19:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:49.070 23:19:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:49.070 23:19:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:49.070 23:19:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:49.070 23:19:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:49.070 23:19:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:49.070 23:19:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:49.070 23:19:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:49.070 23:19:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:49.070 23:19:54 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:49.070 192.168.100.9' 00:20:49.070 23:19:54 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:49.070 192.168.100.9' 00:20:49.070 23:19:54 -- nvmf/common.sh@445 -- # head -n 1 00:20:49.070 23:19:54 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:49.070 23:19:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:49.070 192.168.100.9' 00:20:49.070 23:19:54 -- nvmf/common.sh@446 -- # tail -n +2 00:20:49.070 23:19:54 -- nvmf/common.sh@446 -- # head -n 1 00:20:49.070 23:19:54 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:49.070 23:19:54 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:49.070 23:19:54 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:49.070 23:19:54 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:49.070 23:19:54 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:49.070 23:19:54 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:49.070 23:19:54 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:49.070 23:19:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:49.070 23:19:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:49.070 23:19:54 -- common/autotest_common.sh@10 -- # set +x 00:20:49.070 23:19:54 -- nvmf/common.sh@469 -- # nvmfpid=666016 00:20:49.070 23:19:54 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.070 23:19:54 -- nvmf/common.sh@470 -- # waitforlisten 666016 00:20:49.070 23:19:54 -- common/autotest_common.sh@819 -- # '[' -z 666016 ']' 00:20:49.070 23:19:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.070 23:19:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:49.070 23:19:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.070 23:19:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:49.070 23:19:54 -- common/autotest_common.sh@10 -- # set +x 00:20:49.070 [2024-11-02 23:19:54.737523] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:20:49.070 [2024-11-02 23:19:54.737570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.070 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.070 [2024-11-02 23:19:54.809029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.329 [2024-11-02 23:19:54.884094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:49.329 [2024-11-02 23:19:54.884226] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.329 [2024-11-02 23:19:54.884236] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.329 [2024-11-02 23:19:54.884245] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.329 [2024-11-02 23:19:54.884292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.329 [2024-11-02 23:19:54.884394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.329 [2024-11-02 23:19:54.884413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.329 [2024-11-02 23:19:54.884421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.894 23:19:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.894 23:19:55 -- common/autotest_common.sh@852 -- # return 0 00:20:49.894 23:19:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:49.894 23:19:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:49.894 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:49.894 23:19:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.894 23:19:55 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:49.894 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.894 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:49.894 [2024-11-02 23:19:55.616408] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1367090/0x136b580) succeed. 00:20:49.894 [2024-11-02 23:19:55.625665] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1368680/0x13acc20) succeed. 
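With the rdma transport created and both mlx5 IB devices registered, the trace that follows provisions eleven subsystems, one malloc bdev and one listener each (the multiconnection.sh@21 through @25 calls in the trace). A condensed sketch of that loop, assuming rpc_cmd is the usual wrapper around scripts/rpc.py used by SPDK's test scripts; sizes and names come from the trace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SUBSYS=11):

# Sketch of the provisioning loop traced below; not the verbatim script.
NVMF_SUBSYS=11
for i in $(seq 1 $NVMF_SUBSYS); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                        # 64 MB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done

Each subsystem is later attached from the host side with nvme connect -i 15 and verified with waitforserial against the matching SPDK$i serial number, as the connect trace further down shows.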
00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:50.152 23:19:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.152 23:19:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 Malloc1 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 [2024-11-02 23:19:55.804707] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.152 23:19:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 Malloc2 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.152 23:19:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.152 Malloc3 00:20:50.152 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.152 23:19:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:50.152 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.152 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.153 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.153 23:19:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:50.153 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.153 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.153 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.153 23:19:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:20:50.153 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.153 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.153 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.153 23:19:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.153 23:19:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:50.153 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.153 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 Malloc4 00:20:50.411 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:50.411 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:50.411 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:20:50.411 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.411 23:19:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:50.411 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 Malloc5 00:20:50.411 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:50.411 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 
Malloc5 00:20:50.411 23:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.411 23:19:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 Malloc6 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.411 23:19:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 Malloc7 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.411 23:19:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.411 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:20:50.411 23:19:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.411 23:19:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:50.411 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.412 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 Malloc8 00:20:50.412 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.412 23:19:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:50.412 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.412 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.412 23:19:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:50.412 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.412 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.412 23:19:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:20:50.412 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.412 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.412 23:19:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.412 23:19:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:50.412 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.412 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 Malloc9 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.670 23:19:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 Malloc10 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:50.670 23:19:56 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.670 23:19:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 Malloc11 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:20:50.670 23:19:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:50.670 23:19:56 -- common/autotest_common.sh@10 -- # set +x 00:20:50.670 23:19:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:50.670 23:19:56 -- target/multiconnection.sh@28 -- # seq 1 11 00:20:50.670 23:19:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.670 23:19:56 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:51.604 23:19:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:51.604 23:19:57 -- common/autotest_common.sh@1177 -- # local i=0 00:20:51.604 23:19:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:51.604 23:19:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:51.604 23:19:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:54.129 23:19:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:54.129 23:19:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:54.129 23:19:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:20:54.129 23:19:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:54.129 23:19:59 -- 
common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:54.129 23:19:59 -- common/autotest_common.sh@1187 -- # return 0 00:20:54.129 23:19:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.129 23:19:59 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:20:54.693 23:20:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:54.693 23:20:00 -- common/autotest_common.sh@1177 -- # local i=0 00:20:54.693 23:20:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:54.693 23:20:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:54.693 23:20:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:56.591 23:20:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:56.591 23:20:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:56.591 23:20:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:20:56.591 23:20:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:56.591 23:20:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:56.591 23:20:02 -- common/autotest_common.sh@1187 -- # return 0 00:20:56.591 23:20:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:56.591 23:20:02 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:20:57.961 23:20:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:57.961 23:20:03 -- common/autotest_common.sh@1177 -- # local i=0 00:20:57.961 23:20:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:57.961 23:20:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:57.961 23:20:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:59.858 23:20:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:59.858 23:20:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:59.858 23:20:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:20:59.858 23:20:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:59.858 23:20:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:59.858 23:20:05 -- common/autotest_common.sh@1187 -- # return 0 00:20:59.858 23:20:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:59.858 23:20:05 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:00.791 23:20:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:00.791 23:20:06 -- common/autotest_common.sh@1177 -- # local i=0 00:21:00.791 23:20:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:00.791 23:20:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:00.791 23:20:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:02.689 23:20:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:02.689 23:20:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:02.689 
23:20:08 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:21:02.689 23:20:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:02.689 23:20:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:02.689 23:20:08 -- common/autotest_common.sh@1187 -- # return 0 00:21:02.689 23:20:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.689 23:20:08 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:03.622 23:20:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:03.622 23:20:09 -- common/autotest_common.sh@1177 -- # local i=0 00:21:03.622 23:20:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:03.622 23:20:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:03.622 23:20:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:06.148 23:20:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:06.148 23:20:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:06.148 23:20:11 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:21:06.148 23:20:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:06.148 23:20:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:06.148 23:20:11 -- common/autotest_common.sh@1187 -- # return 0 00:21:06.148 23:20:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:06.148 23:20:11 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:06.713 23:20:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:06.713 23:20:12 -- common/autotest_common.sh@1177 -- # local i=0 00:21:06.713 23:20:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:06.713 23:20:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:06.713 23:20:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:08.611 23:20:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:08.611 23:20:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:08.611 23:20:14 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:21:08.869 23:20:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:08.869 23:20:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:08.869 23:20:14 -- common/autotest_common.sh@1187 -- # return 0 00:21:08.869 23:20:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:08.869 23:20:14 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:09.802 23:20:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:09.802 23:20:15 -- common/autotest_common.sh@1177 -- # local i=0 00:21:09.802 23:20:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:09.802 23:20:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:09.802 23:20:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:11.700 
23:20:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:11.700 23:20:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:11.700 23:20:17 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:21:11.700 23:20:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:11.700 23:20:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:11.700 23:20:17 -- common/autotest_common.sh@1187 -- # return 0 00:21:11.700 23:20:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:11.700 23:20:17 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:12.634 23:20:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:12.634 23:20:18 -- common/autotest_common.sh@1177 -- # local i=0 00:21:12.634 23:20:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:12.634 23:20:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:12.634 23:20:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:15.204 23:20:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:15.204 23:20:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:15.204 23:20:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:21:15.204 23:20:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:15.204 23:20:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.204 23:20:20 -- common/autotest_common.sh@1187 -- # return 0 00:21:15.204 23:20:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.204 23:20:20 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:15.773 23:20:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:15.773 23:20:21 -- common/autotest_common.sh@1177 -- # local i=0 00:21:15.773 23:20:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:15.774 23:20:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:15.774 23:20:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:17.671 23:20:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:17.671 23:20:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:17.671 23:20:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:21:17.671 23:20:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:17.671 23:20:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:17.671 23:20:23 -- common/autotest_common.sh@1187 -- # return 0 00:21:17.671 23:20:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:17.671 23:20:23 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:19.045 23:20:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:19.045 23:20:24 -- common/autotest_common.sh@1177 -- # local i=0 00:21:19.045 23:20:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:21:19.045 23:20:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:19.045 23:20:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:20.942 23:20:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:20.942 23:20:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:20.942 23:20:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:21:20.942 23:20:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:20.942 23:20:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:20.942 23:20:26 -- common/autotest_common.sh@1187 -- # return 0 00:21:20.942 23:20:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:20.942 23:20:26 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:21:21.874 23:20:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:21.874 23:20:27 -- common/autotest_common.sh@1177 -- # local i=0 00:21:21.874 23:20:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:21.874 23:20:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:21.874 23:20:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:23.772 23:20:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:23.772 23:20:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:23.772 23:20:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:21:23.772 23:20:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:23.772 23:20:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:23.772 23:20:29 -- common/autotest_common.sh@1187 -- # return 0 00:21:23.772 23:20:29 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:23.772 [global] 00:21:23.772 thread=1 00:21:23.772 invalidate=1 00:21:23.772 rw=read 00:21:23.772 time_based=1 00:21:23.772 runtime=10 00:21:23.772 ioengine=libaio 00:21:23.772 direct=1 00:21:23.772 bs=262144 00:21:23.772 iodepth=64 00:21:23.772 norandommap=1 00:21:23.772 numjobs=1 00:21:23.772 00:21:23.772 [job0] 00:21:23.772 filename=/dev/nvme0n1 00:21:23.772 [job1] 00:21:23.772 filename=/dev/nvme10n1 00:21:23.772 [job2] 00:21:23.772 filename=/dev/nvme1n1 00:21:23.772 [job3] 00:21:23.772 filename=/dev/nvme2n1 00:21:23.772 [job4] 00:21:23.772 filename=/dev/nvme3n1 00:21:23.772 [job5] 00:21:23.772 filename=/dev/nvme4n1 00:21:23.772 [job6] 00:21:23.772 filename=/dev/nvme5n1 00:21:23.772 [job7] 00:21:23.772 filename=/dev/nvme6n1 00:21:23.772 [job8] 00:21:23.772 filename=/dev/nvme7n1 00:21:23.772 [job9] 00:21:23.772 filename=/dev/nvme8n1 00:21:23.772 [job10] 00:21:23.772 filename=/dev/nvme9n1 00:21:24.033 Could not set queue depth (nvme0n1) 00:21:24.033 Could not set queue depth (nvme10n1) 00:21:24.033 Could not set queue depth (nvme1n1) 00:21:24.033 Could not set queue depth (nvme2n1) 00:21:24.033 Could not set queue depth (nvme3n1) 00:21:24.033 Could not set queue depth (nvme4n1) 00:21:24.033 Could not set queue depth (nvme5n1) 00:21:24.034 Could not set queue depth (nvme6n1) 00:21:24.034 Could not set queue depth (nvme7n1) 00:21:24.034 Could not set queue depth (nvme8n1) 00:21:24.034 Could not set queue depth (nvme9n1) 00:21:24.293 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, 
(W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.293 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.293 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.293 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.293 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.293 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.293 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.293 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.293 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.294 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.294 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:24.294 fio-3.35 00:21:24.294 Starting 11 threads 00:21:36.488 00:21:36.488 job0: (groupid=0, jobs=1): err= 0: pid=672339: Sat Nov 2 23:20:40 2024 00:21:36.488 read: IOPS=1105, BW=276MiB/s (290MB/s)(2775MiB/10041msec) 00:21:36.488 slat (usec): min=12, max=19549, avg=893.78, stdev=2284.16 00:21:36.488 clat (usec): min=244, max=83117, avg=56941.60, stdev=4935.29 00:21:36.488 lat (usec): min=291, max=83147, avg=57835.38, stdev=5387.30 00:21:36.488 clat percentiles (usec): 00:21:36.488 | 1.00th=[41681], 5.00th=[54264], 10.00th=[55313], 20.00th=[55837], 00:21:36.488 | 30.00th=[56361], 40.00th=[56361], 50.00th=[56886], 60.00th=[57410], 00:21:36.488 | 70.00th=[57934], 80.00th=[58459], 90.00th=[60556], 95.00th=[62129], 00:21:36.488 | 99.00th=[68682], 99.50th=[70779], 99.90th=[78119], 99.95th=[79168], 00:21:36.488 | 99.99th=[81265] 00:21:36.488 bw ( KiB/s): min=274432, max=316928, per=8.34%, avg=282493.30, stdev=8701.39, samples=20 00:21:36.488 iops : min= 1072, max= 1238, avg=1103.45, stdev=33.99, samples=20 00:21:36.488 lat (usec) : 250=0.01%, 500=0.05%, 1000=0.01% 00:21:36.488 lat (msec) : 2=0.10%, 20=0.30%, 50=2.79%, 100=96.75% 00:21:36.488 cpu : usr=0.40%, sys=5.15%, ctx=2149, majf=0, minf=3659 00:21:36.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:36.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.488 issued rwts: total=11099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.488 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.488 job1: (groupid=0, jobs=1): err= 0: pid=672340: Sat Nov 2 23:20:40 2024 00:21:36.488 read: IOPS=1450, BW=363MiB/s (380MB/s)(3648MiB/10061msec) 00:21:36.488 slat (usec): min=12, max=32968, avg=681.85, stdev=1820.66 00:21:36.488 clat (msec): min=10, max=132, avg=43.40, stdev= 5.09 00:21:36.488 lat (msec): min=11, max=132, avg=44.08, stdev= 5.37 00:21:36.488 clat percentiles (msec): 00:21:36.488 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:21:36.488 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:21:36.488 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 48], 00:21:36.488 | 
99.00th=[ 71], 99.50th=[ 75], 99.90th=[ 124], 99.95th=[ 126], 00:21:36.488 | 99.99th=[ 133] 00:21:36.488 bw ( KiB/s): min=310784, max=381952, per=10.98%, avg=371916.40, stdev=15145.05, samples=20 00:21:36.488 iops : min= 1214, max= 1492, avg=1452.75, stdev=59.16, samples=20 00:21:36.488 lat (msec) : 20=0.32%, 50=96.92%, 100=2.66%, 250=0.10% 00:21:36.488 cpu : usr=0.50%, sys=5.92%, ctx=2733, majf=0, minf=4097 00:21:36.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:36.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.488 issued rwts: total=14591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.488 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.488 job2: (groupid=0, jobs=1): err= 0: pid=672342: Sat Nov 2 23:20:40 2024 00:21:36.488 read: IOPS=1102, BW=276MiB/s (289MB/s)(2767MiB/10039msec) 00:21:36.488 slat (usec): min=13, max=19974, avg=899.49, stdev=2436.86 00:21:36.488 clat (usec): min=11774, max=88172, avg=57079.79, stdev=4304.93 00:21:36.488 lat (usec): min=12028, max=88199, avg=57979.29, stdev=4850.95 00:21:36.488 clat percentiles (usec): 00:21:36.488 | 1.00th=[42206], 5.00th=[54264], 10.00th=[55313], 20.00th=[55837], 00:21:36.488 | 30.00th=[56361], 40.00th=[56361], 50.00th=[56886], 60.00th=[57410], 00:21:36.488 | 70.00th=[57934], 80.00th=[58459], 90.00th=[60556], 95.00th=[62129], 00:21:36.488 | 99.00th=[68682], 99.50th=[70779], 99.90th=[77071], 99.95th=[79168], 00:21:36.488 | 99.99th=[80217] 00:21:36.488 bw ( KiB/s): min=268800, max=307712, per=8.32%, avg=281753.40, stdev=7678.82, samples=20 00:21:36.488 iops : min= 1050, max= 1202, avg=1100.55, stdev=30.00, samples=20 00:21:36.488 lat (msec) : 20=0.28%, 50=2.58%, 100=97.14% 00:21:36.488 cpu : usr=0.45%, sys=4.86%, ctx=2095, majf=0, minf=4097 00:21:36.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:36.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.488 issued rwts: total=11069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.488 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.488 job3: (groupid=0, jobs=1): err= 0: pid=672348: Sat Nov 2 23:20:40 2024 00:21:36.488 read: IOPS=1451, BW=363MiB/s (380MB/s)(3649MiB/10059msec) 00:21:36.488 slat (usec): min=12, max=23911, avg=682.32, stdev=1806.33 00:21:36.488 clat (msec): min=11, max=115, avg=43.38, stdev= 5.18 00:21:36.488 lat (msec): min=11, max=115, avg=44.06, stdev= 5.44 00:21:36.488 clat percentiles (msec): 00:21:36.488 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:21:36.488 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:21:36.488 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:21:36.488 | 99.00th=[ 71], 99.50th=[ 75], 99.90th=[ 111], 99.95th=[ 113], 00:21:36.488 | 99.99th=[ 115] 00:21:36.488 bw ( KiB/s): min=297472, max=381952, per=10.98%, avg=371982.30, stdev=18024.04, samples=20 00:21:36.488 iops : min= 1162, max= 1492, avg=1453.05, stdev=70.41, samples=20 00:21:36.488 lat (msec) : 20=0.31%, 50=96.92%, 100=2.58%, 250=0.19% 00:21:36.488 cpu : usr=0.55%, sys=5.71%, ctx=2733, majf=0, minf=4097 00:21:36.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:36.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.488 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.488 issued rwts: total=14596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.488 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.488 job4: (groupid=0, jobs=1): err= 0: pid=672350: Sat Nov 2 23:20:40 2024 00:21:36.488 read: IOPS=1448, BW=362MiB/s (380MB/s)(3643MiB/10059msec) 00:21:36.488 slat (usec): min=12, max=20617, avg=682.76, stdev=1739.71 00:21:36.488 clat (msec): min=10, max=129, avg=43.45, stdev= 5.50 00:21:36.488 lat (msec): min=11, max=129, avg=44.13, stdev= 5.73 00:21:36.488 clat percentiles (msec): 00:21:36.488 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:21:36.488 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:21:36.488 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 48], 00:21:36.488 | 99.00th=[ 71], 99.50th=[ 77], 99.90th=[ 121], 99.95th=[ 123], 00:21:36.488 | 99.99th=[ 125] 00:21:36.488 bw ( KiB/s): min=297472, max=380416, per=10.96%, avg=371377.85, stdev=17758.35, samples=20 00:21:36.488 iops : min= 1162, max= 1486, avg=1450.65, stdev=69.38, samples=20 00:21:36.488 lat (msec) : 20=0.30%, 50=97.10%, 100=2.37%, 250=0.24% 00:21:36.488 cpu : usr=0.43%, sys=6.18%, ctx=2666, majf=0, minf=4097 00:21:36.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:36.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.489 issued rwts: total=14570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.489 job5: (groupid=0, jobs=1): err= 0: pid=672351: Sat Nov 2 23:20:40 2024 00:21:36.489 read: IOPS=1124, BW=281MiB/s (295MB/s)(2822MiB/10038msec) 00:21:36.489 slat (usec): min=14, max=18712, avg=881.91, stdev=2404.91 00:21:36.489 clat (usec): min=12425, max=79517, avg=55971.85, stdev=4114.56 00:21:36.489 lat (usec): min=12676, max=79560, avg=56853.75, stdev=4661.37 00:21:36.489 clat percentiles (usec): 00:21:36.489 | 1.00th=[41157], 5.00th=[53740], 10.00th=[54264], 20.00th=[54789], 00:21:36.489 | 30.00th=[54789], 40.00th=[55313], 50.00th=[55837], 60.00th=[55837], 00:21:36.489 | 70.00th=[56361], 80.00th=[57410], 90.00th=[58983], 95.00th=[61080], 00:21:36.489 | 99.00th=[68682], 99.50th=[70779], 99.90th=[77071], 99.95th=[79168], 00:21:36.489 | 99.99th=[79168] 00:21:36.489 bw ( KiB/s): min=276521, max=302592, per=8.48%, avg=287387.65, stdev=5264.55, samples=20 00:21:36.489 iops : min= 1080, max= 1182, avg=1122.60, stdev=20.58, samples=20 00:21:36.489 lat (msec) : 20=0.22%, 50=2.39%, 100=97.39% 00:21:36.489 cpu : usr=0.42%, sys=4.84%, ctx=2090, majf=0, minf=4097 00:21:36.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:36.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.489 issued rwts: total=11288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.489 job6: (groupid=0, jobs=1): err= 0: pid=672352: Sat Nov 2 23:20:40 2024 00:21:36.489 read: IOPS=1103, BW=276MiB/s (289MB/s)(2770MiB/10040msec) 00:21:36.489 slat (usec): min=12, max=28351, avg=899.28, stdev=2754.65 00:21:36.489 clat (usec): min=12026, max=86255, avg=57039.24, stdev=4402.87 00:21:36.489 lat (usec): min=12278, max=86713, avg=57938.52, stdev=5117.99 00:21:36.489 clat percentiles (usec): 
00:21:36.489 | 1.00th=[42206], 5.00th=[54264], 10.00th=[55313], 20.00th=[55837], 00:21:36.489 | 30.00th=[56361], 40.00th=[56361], 50.00th=[56886], 60.00th=[57410], 00:21:36.489 | 70.00th=[57934], 80.00th=[58459], 90.00th=[60031], 95.00th=[61604], 00:21:36.489 | 99.00th=[71828], 99.50th=[76022], 99.90th=[81265], 99.95th=[82314], 00:21:36.489 | 99.99th=[83362] 00:21:36.489 bw ( KiB/s): min=273408, max=302080, per=8.33%, avg=281983.10, stdev=6406.16, samples=20 00:21:36.489 iops : min= 1068, max= 1180, avg=1101.45, stdev=25.01, samples=20 00:21:36.489 lat (msec) : 20=0.26%, 50=2.65%, 100=97.08% 00:21:36.489 cpu : usr=0.43%, sys=4.74%, ctx=2087, majf=0, minf=4097 00:21:36.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:36.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.489 issued rwts: total=11078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.489 job7: (groupid=0, jobs=1): err= 0: pid=672353: Sat Nov 2 23:20:40 2024 00:21:36.489 read: IOPS=1112, BW=278MiB/s (292MB/s)(2797MiB/10058msec) 00:21:36.489 slat (usec): min=13, max=22225, avg=878.59, stdev=2218.96 00:21:36.489 clat (msec): min=12, max=128, avg=56.60, stdev= 5.01 00:21:36.489 lat (msec): min=12, max=128, avg=57.47, stdev= 5.40 00:21:36.489 clat percentiles (msec): 00:21:36.489 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 55], 00:21:36.489 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 56], 60.00th=[ 57], 00:21:36.489 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 60], 95.00th=[ 63], 00:21:36.489 | 99.00th=[ 74], 99.50th=[ 81], 99.90th=[ 108], 99.95th=[ 118], 00:21:36.489 | 99.99th=[ 129] 00:21:36.489 bw ( KiB/s): min=254464, max=292352, per=8.41%, avg=284774.40, stdev=8089.63, samples=20 00:21:36.489 iops : min= 994, max= 1142, avg=1112.40, stdev=31.60, samples=20 00:21:36.489 lat (msec) : 20=0.29%, 50=0.42%, 100=99.07%, 250=0.22% 00:21:36.489 cpu : usr=0.56%, sys=5.00%, ctx=2162, majf=0, minf=4097 00:21:36.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:36.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.489 issued rwts: total=11187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.489 job8: (groupid=0, jobs=1): err= 0: pid=672354: Sat Nov 2 23:20:40 2024 00:21:36.489 read: IOPS=1125, BW=281MiB/s (295MB/s)(2826MiB/10040msec) 00:21:36.489 slat (usec): min=13, max=17470, avg=872.96, stdev=2257.60 00:21:36.489 clat (usec): min=11532, max=80779, avg=55909.20, stdev=4152.15 00:21:36.489 lat (usec): min=11784, max=80830, avg=56782.17, stdev=4641.95 00:21:36.489 clat percentiles (usec): 00:21:36.489 | 1.00th=[41157], 5.00th=[53740], 10.00th=[54264], 20.00th=[54789], 00:21:36.489 | 30.00th=[54789], 40.00th=[55313], 50.00th=[55837], 60.00th=[55837], 00:21:36.489 | 70.00th=[56361], 80.00th=[57410], 90.00th=[59507], 95.00th=[61080], 00:21:36.489 | 99.00th=[67634], 99.50th=[69731], 99.90th=[73925], 99.95th=[76022], 00:21:36.489 | 99.99th=[80217] 00:21:36.489 bw ( KiB/s): min=282164, max=307200, per=8.50%, avg=287768.90, stdev=5047.03, samples=20 00:21:36.489 iops : min= 1102, max= 1200, avg=1124.05, stdev=19.72, samples=20 00:21:36.489 lat (msec) : 20=0.31%, 50=2.57%, 100=97.12% 00:21:36.489 cpu : 
usr=0.44%, sys=5.18%, ctx=2175, majf=0, minf=4097 00:21:36.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:36.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.489 issued rwts: total=11304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.489 job9: (groupid=0, jobs=1): err= 0: pid=672357: Sat Nov 2 23:20:40 2024 00:21:36.489 read: IOPS=1126, BW=282MiB/s (295MB/s)(2826MiB/10037msec) 00:21:36.489 slat (usec): min=12, max=18567, avg=880.38, stdev=2255.36 00:21:36.489 clat (usec): min=12388, max=79840, avg=55889.93, stdev=3996.40 00:21:36.489 lat (usec): min=12646, max=79868, avg=56770.31, stdev=4505.30 00:21:36.489 clat percentiles (usec): 00:21:36.489 | 1.00th=[41157], 5.00th=[53740], 10.00th=[54264], 20.00th=[54789], 00:21:36.489 | 30.00th=[54789], 40.00th=[55313], 50.00th=[55837], 60.00th=[55837], 00:21:36.489 | 70.00th=[56361], 80.00th=[57410], 90.00th=[58983], 95.00th=[60556], 00:21:36.489 | 99.00th=[66323], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:21:36.489 | 99.99th=[73925] 00:21:36.489 bw ( KiB/s): min=281088, max=314880, per=8.50%, avg=287769.60, stdev=6872.16, samples=20 00:21:36.489 iops : min= 1098, max= 1230, avg=1124.10, stdev=26.84, samples=20 00:21:36.489 lat (msec) : 20=0.26%, 50=2.61%, 100=97.13% 00:21:36.489 cpu : usr=0.52%, sys=4.94%, ctx=2133, majf=0, minf=4097 00:21:36.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:36.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.489 issued rwts: total=11304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.489 job10: (groupid=0, jobs=1): err= 0: pid=672358: Sat Nov 2 23:20:40 2024 00:21:36.489 read: IOPS=1096, BW=274MiB/s (287MB/s)(2757MiB/10059msec) 00:21:36.489 slat (usec): min=13, max=20222, avg=897.78, stdev=2336.35 00:21:36.489 clat (usec): min=1303, max=128123, avg=57420.36, stdev=6251.55 00:21:36.489 lat (usec): min=1396, max=128183, avg=58318.13, stdev=6652.32 00:21:36.489 clat percentiles (msec): 00:21:36.489 | 1.00th=[ 41], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 56], 00:21:36.489 | 30.00th=[ 57], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:21:36.489 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 63], 00:21:36.489 | 99.00th=[ 73], 99.50th=[ 77], 99.90th=[ 110], 99.95th=[ 110], 00:21:36.489 | 99.99th=[ 129] 00:21:36.489 bw ( KiB/s): min=273408, max=285696, per=8.29%, avg=280705.80, stdev=3531.71, samples=20 00:21:36.489 iops : min= 1068, max= 1116, avg=1096.50, stdev=13.81, samples=20 00:21:36.489 lat (msec) : 2=0.08%, 4=0.46%, 20=0.24%, 50=0.42%, 100=98.59% 00:21:36.489 lat (msec) : 250=0.22% 00:21:36.489 cpu : usr=0.46%, sys=5.12%, ctx=2165, majf=0, minf=4097 00:21:36.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:36.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:36.489 issued rwts: total=11027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:36.489 00:21:36.489 Run status group 0 (all jobs): 00:21:36.489 READ: bw=3308MiB/s (3468MB/s), 
274MiB/s-363MiB/s (287MB/s-380MB/s), io=32.5GiB (34.9GB), run=10037-10061msec 00:21:36.489 00:21:36.489 Disk stats (read/write): 00:21:36.489 nvme0n1: ios=21754/0, merge=0/0, ticks=1223789/0, in_queue=1223789, util=96.84% 00:21:36.489 nvme10n1: ios=28842/0, merge=0/0, ticks=1218718/0, in_queue=1218718, util=97.09% 00:21:36.489 nvme1n1: ios=21716/0, merge=0/0, ticks=1223713/0, in_queue=1223713, util=97.41% 00:21:36.489 nvme2n1: ios=28893/0, merge=0/0, ticks=1221736/0, in_queue=1221736, util=97.61% 00:21:36.489 nvme3n1: ios=28852/0, merge=0/0, ticks=1219904/0, in_queue=1219904, util=97.71% 00:21:36.489 nvme4n1: ios=22158/0, merge=0/0, ticks=1224142/0, in_queue=1224142, util=98.10% 00:21:36.489 nvme5n1: ios=21747/0, merge=0/0, ticks=1225758/0, in_queue=1225758, util=98.31% 00:21:36.489 nvme6n1: ios=22082/0, merge=0/0, ticks=1222015/0, in_queue=1222015, util=98.42% 00:21:36.489 nvme7n1: ios=22182/0, merge=0/0, ticks=1224799/0, in_queue=1224799, util=98.88% 00:21:36.489 nvme8n1: ios=22161/0, merge=0/0, ticks=1224177/0, in_queue=1224177, util=99.07% 00:21:36.489 nvme9n1: ios=21736/0, merge=0/0, ticks=1222129/0, in_queue=1222129, util=99.23% 00:21:36.489 23:20:40 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:36.489 [global] 00:21:36.489 thread=1 00:21:36.489 invalidate=1 00:21:36.489 rw=randwrite 00:21:36.489 time_based=1 00:21:36.489 runtime=10 00:21:36.489 ioengine=libaio 00:21:36.489 direct=1 00:21:36.489 bs=262144 00:21:36.489 iodepth=64 00:21:36.489 norandommap=1 00:21:36.489 numjobs=1 00:21:36.489 00:21:36.489 [job0] 00:21:36.489 filename=/dev/nvme0n1 00:21:36.489 [job1] 00:21:36.489 filename=/dev/nvme10n1 00:21:36.489 [job2] 00:21:36.489 filename=/dev/nvme1n1 00:21:36.489 [job3] 00:21:36.489 filename=/dev/nvme2n1 00:21:36.489 [job4] 00:21:36.489 filename=/dev/nvme3n1 00:21:36.489 [job5] 00:21:36.489 filename=/dev/nvme4n1 00:21:36.489 [job6] 00:21:36.489 filename=/dev/nvme5n1 00:21:36.489 [job7] 00:21:36.490 filename=/dev/nvme6n1 00:21:36.490 [job8] 00:21:36.490 filename=/dev/nvme7n1 00:21:36.490 [job9] 00:21:36.490 filename=/dev/nvme8n1 00:21:36.490 [job10] 00:21:36.490 filename=/dev/nvme9n1 00:21:36.490 Could not set queue depth (nvme0n1) 00:21:36.490 Could not set queue depth (nvme10n1) 00:21:36.490 Could not set queue depth (nvme1n1) 00:21:36.490 Could not set queue depth (nvme2n1) 00:21:36.490 Could not set queue depth (nvme3n1) 00:21:36.490 Could not set queue depth (nvme4n1) 00:21:36.490 Could not set queue depth (nvme5n1) 00:21:36.490 Could not set queue depth (nvme6n1) 00:21:36.490 Could not set queue depth (nvme7n1) 00:21:36.490 Could not set queue depth (nvme8n1) 00:21:36.490 Could not set queue depth (nvme9n1) 00:21:36.490 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:21:36.490 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:36.490 fio-3.35 00:21:36.490 Starting 11 threads 00:21:46.469 00:21:46.469 job0: (groupid=0, jobs=1): err= 0: pid=674101: Sat Nov 2 23:20:51 2024 00:21:46.469 write: IOPS=961, BW=240MiB/s (252MB/s)(2416MiB/10053msec); 0 zone resets 00:21:46.469 slat (usec): min=21, max=35686, avg=1004.62, stdev=1925.17 00:21:46.469 clat (msec): min=4, max=125, avg=65.55, stdev=14.42 00:21:46.469 lat (msec): min=4, max=125, avg=66.56, stdev=14.63 00:21:46.469 clat percentiles (msec): 00:21:46.469 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 55], 00:21:46.469 | 30.00th=[ 59], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:46.469 | 70.00th=[ 74], 80.00th=[ 75], 90.00th=[ 78], 95.00th=[ 81], 00:21:46.469 | 99.00th=[ 88], 99.50th=[ 92], 99.90th=[ 113], 99.95th=[ 123], 00:21:46.469 | 99.99th=[ 126] 00:21:46.469 bw ( KiB/s): min=200192, max=389120, per=7.01%, avg=245760.00, stdev=43028.85, samples=20 00:21:46.469 iops : min= 782, max= 1520, avg=960.00, stdev=168.08, samples=20 00:21:46.469 lat (msec) : 10=0.27%, 20=0.79%, 50=10.61%, 100=88.12%, 250=0.22% 00:21:46.469 cpu : usr=1.92%, sys=3.31%, ctx=2440, majf=0, minf=13 00:21:46.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:46.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.469 issued rwts: total=0,9663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.469 job1: (groupid=0, jobs=1): err= 0: pid=674114: Sat Nov 2 23:20:51 2024 00:21:46.469 write: IOPS=976, BW=244MiB/s (256MB/s)(2457MiB/10062msec); 0 zone resets 00:21:46.469 slat (usec): min=21, max=38837, avg=988.01, stdev=2107.81 00:21:46.469 clat (msec): min=9, max=149, avg=64.50, stdev=24.95 00:21:46.469 lat (msec): min=9, max=149, avg=65.49, stdev=25.34 00:21:46.469 clat percentiles (msec): 00:21:46.469 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 36], 00:21:46.469 | 30.00th=[ 37], 40.00th=[ 65], 50.00th=[ 74], 60.00th=[ 80], 00:21:46.469 | 70.00th=[ 86], 80.00th=[ 90], 90.00th=[ 92], 95.00th=[ 94], 00:21:46.469 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 138], 99.95th=[ 142], 00:21:46.469 | 99.99th=[ 150] 00:21:46.469 bw ( KiB/s): min=174080, max=460288, per=7.13%, avg=250043.90, stdev=96482.07, samples=20 00:21:46.469 iops : min= 680, max= 1798, avg=976.70, stdev=376.85, samples=20 00:21:46.469 lat (msec) : 10=0.04%, 20=0.43%, 50=39.14%, 100=59.96%, 250=0.44% 00:21:46.469 cpu : usr=2.21%, sys=3.12%, ctx=2506, majf=0, minf=270 00:21:46.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:46.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.469 issued rwts: total=0,9829,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:46.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.469 job2: (groupid=0, jobs=1): err= 0: pid=674115: Sat Nov 2 23:20:51 2024 00:21:46.469 write: IOPS=930, BW=233MiB/s (244MB/s)(2338MiB/10052msec); 0 zone resets 00:21:46.469 slat (usec): min=23, max=24398, avg=1049.88, stdev=1905.81 00:21:46.469 clat (usec): min=943, max=122617, avg=67711.28, stdev=12251.74 00:21:46.469 lat (usec): min=985, max=122680, avg=68761.16, stdev=12377.22 00:21:46.469 clat percentiles (msec): 00:21:46.469 | 1.00th=[ 6], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 57], 00:21:46.469 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:46.469 | 70.00th=[ 74], 80.00th=[ 75], 90.00th=[ 78], 95.00th=[ 82], 00:21:46.469 | 99.00th=[ 89], 99.50th=[ 92], 99.90th=[ 113], 99.95th=[ 120], 00:21:46.469 | 99.99th=[ 123] 00:21:46.469 bw ( KiB/s): min=200704, max=296529, per=6.79%, avg=237853.65, stdev=30627.16, samples=20 00:21:46.469 iops : min= 784, max= 1158, avg=929.10, stdev=119.61, samples=20 00:21:46.469 lat (usec) : 1000=0.02% 00:21:46.469 lat (msec) : 2=0.27%, 4=0.48%, 10=0.63%, 20=0.09%, 50=0.86% 00:21:46.469 lat (msec) : 100=97.37%, 250=0.29% 00:21:46.469 cpu : usr=2.23%, sys=4.46%, ctx=2392, majf=0, minf=12 00:21:46.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:46.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.469 issued rwts: total=0,9353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.469 job3: (groupid=0, jobs=1): err= 0: pid=674116: Sat Nov 2 23:20:51 2024 00:21:46.469 write: IOPS=918, BW=230MiB/s (241MB/s)(2307MiB/10052msec); 0 zone resets 00:21:46.469 slat (usec): min=28, max=19297, avg=1078.04, stdev=1918.76 00:21:46.469 clat (msec): min=12, max=125, avg=68.61, stdev= 9.67 00:21:46.469 lat (msec): min=12, max=125, avg=69.69, stdev= 9.77 00:21:46.469 clat percentiles (msec): 00:21:46.469 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 57], 00:21:46.469 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:46.469 | 70.00th=[ 74], 80.00th=[ 75], 90.00th=[ 79], 95.00th=[ 82], 00:21:46.469 | 99.00th=[ 88], 99.50th=[ 90], 99.90th=[ 111], 99.95th=[ 120], 00:21:46.469 | 99.99th=[ 126] 00:21:46.469 bw ( KiB/s): min=200704, max=296529, per=6.70%, avg=234653.65, stdev=29255.63, samples=20 00:21:46.469 iops : min= 784, max= 1158, avg=916.60, stdev=114.24, samples=20 00:21:46.469 lat (msec) : 20=0.10%, 50=0.92%, 100=98.73%, 250=0.25% 00:21:46.469 cpu : usr=2.10%, sys=4.48%, ctx=2292, majf=0, minf=72 00:21:46.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:46.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.469 issued rwts: total=0,9228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.469 job4: (groupid=0, jobs=1): err= 0: pid=674117: Sat Nov 2 23:20:51 2024 00:21:46.469 write: IOPS=833, BW=208MiB/s (218MB/s)(2097MiB/10064msec); 0 zone resets 00:21:46.469 slat (usec): min=26, max=10976, avg=1186.87, stdev=2150.86 00:21:46.469 clat (msec): min=2, max=154, avg=75.59, stdev=15.49 00:21:46.469 lat (msec): min=2, max=154, avg=76.78, stdev=15.70 00:21:46.469 clat percentiles (msec): 00:21:46.469 | 
1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:21:46.469 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 84], 00:21:46.469 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 93], 95.00th=[ 94], 00:21:46.469 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 142], 99.95th=[ 146], 00:21:46.469 | 99.99th=[ 155] 00:21:46.469 bw ( KiB/s): min=171520, max=296448, per=6.08%, avg=213068.80, stdev=40424.04, samples=20 00:21:46.469 iops : min= 670, max= 1158, avg=832.30, stdev=157.91, samples=20 00:21:46.469 lat (msec) : 4=0.02%, 10=0.10%, 20=0.16%, 50=0.61%, 100=98.64% 00:21:46.469 lat (msec) : 250=0.48% 00:21:46.469 cpu : usr=2.31%, sys=3.75%, ctx=2090, majf=0, minf=267 00:21:46.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:46.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.469 issued rwts: total=0,8386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.469 job5: (groupid=0, jobs=1): err= 0: pid=674118: Sat Nov 2 23:20:51 2024 00:21:46.469 write: IOPS=831, BW=208MiB/s (218MB/s)(2092MiB/10063msec); 0 zone resets 00:21:46.469 slat (usec): min=30, max=11551, avg=1189.40, stdev=2140.79 00:21:46.469 clat (msec): min=14, max=149, avg=75.76, stdev=15.24 00:21:46.469 lat (msec): min=14, max=149, avg=76.95, stdev=15.44 00:21:46.470 clat percentiles (msec): 00:21:46.470 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:21:46.470 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 84], 00:21:46.470 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 93], 95.00th=[ 95], 00:21:46.470 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 138], 99.95th=[ 146], 00:21:46.470 | 99.99th=[ 150] 00:21:46.470 bw ( KiB/s): min=172544, max=295424, per=6.07%, avg=212556.80, stdev=40130.47, samples=20 00:21:46.470 iops : min= 674, max= 1154, avg=830.30, stdev=156.76, samples=20 00:21:46.470 lat (msec) : 20=0.10%, 50=0.54%, 100=98.88%, 250=0.49% 00:21:46.470 cpu : usr=2.04%, sys=4.03%, ctx=2083, majf=0, minf=205 00:21:46.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:46.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.470 issued rwts: total=0,8366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.470 job6: (groupid=0, jobs=1): err= 0: pid=674119: Sat Nov 2 23:20:51 2024 00:21:46.470 write: IOPS=2636, BW=659MiB/s (691MB/s)(6600MiB/10012msec); 0 zone resets 00:21:46.470 slat (usec): min=17, max=6626, avg=376.47, stdev=787.06 00:21:46.470 clat (usec): min=10303, max=62326, avg=23889.38, stdev=11715.94 00:21:46.470 lat (usec): min=10343, max=63615, avg=24265.86, stdev=11890.37 00:21:46.470 clat percentiles (usec): 00:21:46.470 | 1.00th=[14877], 5.00th=[15533], 10.00th=[15926], 20.00th=[16909], 00:21:46.470 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:21:46.470 | 70.00th=[19268], 80.00th=[36439], 90.00th=[38011], 95.00th=[54264], 00:21:46.470 | 99.00th=[57934], 99.50th=[58459], 99.90th=[60031], 99.95th=[61080], 00:21:46.470 | 99.99th=[62129] 00:21:46.470 bw ( KiB/s): min=289792, max=1021440, per=18.92%, avg=663174.74, stdev=267517.85, samples=19 00:21:46.470 iops : min= 1132, max= 3990, avg=2590.53, stdev=1044.99, samples=19 00:21:46.470 lat (msec) : 
20=71.84%, 50=20.94%, 100=7.22% 00:21:46.470 cpu : usr=4.23%, sys=6.65%, ctx=5457, majf=0, minf=137 00:21:46.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:46.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.470 issued rwts: total=0,26398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.470 job7: (groupid=0, jobs=1): err= 0: pid=674120: Sat Nov 2 23:20:51 2024 00:21:46.470 write: IOPS=835, BW=209MiB/s (219MB/s)(2102MiB/10062msec); 0 zone resets 00:21:46.470 slat (usec): min=23, max=11406, avg=1179.88, stdev=2137.14 00:21:46.470 clat (msec): min=16, max=147, avg=75.39, stdev=15.17 00:21:46.470 lat (msec): min=16, max=147, avg=76.57, stdev=15.38 00:21:46.470 clat percentiles (msec): 00:21:46.470 | 1.00th=[ 44], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:21:46.470 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 84], 00:21:46.470 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 92], 95.00th=[ 94], 00:21:46.470 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 136], 99.95th=[ 146], 00:21:46.470 | 99.99th=[ 148] 00:21:46.470 bw ( KiB/s): min=175104, max=297472, per=6.10%, avg=213628.70, stdev=40547.47, samples=20 00:21:46.470 iops : min= 684, max= 1162, avg=834.45, stdev=158.38, samples=20 00:21:46.470 lat (msec) : 20=0.06%, 50=1.45%, 100=98.03%, 250=0.46% 00:21:46.470 cpu : usr=1.74%, sys=2.86%, ctx=2092, majf=0, minf=203 00:21:46.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:46.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.470 issued rwts: total=0,8407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.470 job8: (groupid=0, jobs=1): err= 0: pid=674127: Sat Nov 2 23:20:51 2024 00:21:46.470 write: IOPS=1024, BW=256MiB/s (269MB/s)(2578MiB/10061msec); 0 zone resets 00:21:46.470 slat (usec): min=18, max=37184, avg=938.85, stdev=1964.80 00:21:46.470 clat (msec): min=6, max=150, avg=61.49, stdev=24.43 00:21:46.470 lat (msec): min=6, max=150, avg=62.43, stdev=24.80 00:21:46.470 clat percentiles (msec): 00:21:46.470 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37], 00:21:46.470 | 30.00th=[ 38], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 77], 00:21:46.470 | 70.00th=[ 86], 80.00th=[ 90], 90.00th=[ 92], 95.00th=[ 94], 00:21:46.470 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 136], 99.95th=[ 146], 00:21:46.470 | 99.99th=[ 150] 00:21:46.470 bw ( KiB/s): min=172544, max=446464, per=7.49%, avg=262371.40, stdev=98615.47, samples=20 00:21:46.470 iops : min= 674, max= 1744, avg=1024.85, stdev=385.23, samples=20 00:21:46.470 lat (msec) : 10=0.27%, 20=1.17%, 50=37.89%, 100=60.09%, 250=0.57% 00:21:46.470 cpu : usr=2.29%, sys=4.01%, ctx=2445, majf=0, minf=115 00:21:46.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:46.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.470 issued rwts: total=0,10311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.470 job9: (groupid=0, jobs=1): err= 0: pid=674136: Sat Nov 2 23:20:51 2024 00:21:46.470 write: IOPS=922, 
BW=231MiB/s (242MB/s)(2317MiB/10052msec); 0 zone resets 00:21:46.470 slat (usec): min=24, max=15393, avg=1041.96, stdev=1898.09 00:21:46.470 clat (msec): min=12, max=125, avg=68.34, stdev= 9.76 00:21:46.470 lat (msec): min=12, max=125, avg=69.39, stdev= 9.89 00:21:46.470 clat percentiles (msec): 00:21:46.470 | 1.00th=[ 45], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 57], 00:21:46.470 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:46.470 | 70.00th=[ 74], 80.00th=[ 75], 90.00th=[ 78], 95.00th=[ 80], 00:21:46.470 | 99.00th=[ 88], 99.50th=[ 91], 99.90th=[ 116], 99.95th=[ 120], 00:21:46.470 | 99.99th=[ 126] 00:21:46.470 bw ( KiB/s): min=201216, max=298068, per=6.72%, avg=235677.80, stdev=28869.13, samples=20 00:21:46.470 iops : min= 786, max= 1164, avg=920.60, stdev=112.73, samples=20 00:21:46.470 lat (msec) : 20=0.10%, 50=1.73%, 100=97.93%, 250=0.25% 00:21:46.470 cpu : usr=2.24%, sys=3.34%, ctx=2387, majf=0, minf=14 00:21:46.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:46.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.470 issued rwts: total=0,9268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.470 job10: (groupid=0, jobs=1): err= 0: pid=674142: Sat Nov 2 23:20:51 2024 00:21:46.470 write: IOPS=2851, BW=713MiB/s (748MB/s)(7141MiB/10016msec); 0 zone resets 00:21:46.470 slat (usec): min=15, max=38107, avg=341.87, stdev=883.99 00:21:46.470 clat (usec): min=753, max=102172, avg=22092.64, stdev=12612.74 00:21:46.470 lat (usec): min=807, max=105596, avg=22434.51, stdev=12798.51 00:21:46.470 clat percentiles (msec): 00:21:46.470 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:21:46.470 | 30.00th=[ 18], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 19], 00:21:46.470 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 36], 95.00th=[ 38], 00:21:46.470 | 99.00th=[ 82], 99.50th=[ 85], 99.90th=[ 92], 99.95th=[ 95], 00:21:46.470 | 99.99th=[ 103] 00:21:46.470 bw ( KiB/s): min=216064, max=920064, per=20.82%, avg=729683.10, stdev=234585.53, samples=20 00:21:46.470 iops : min= 844, max= 3594, avg=2850.30, stdev=916.37, samples=20 00:21:46.470 lat (usec) : 1000=0.03% 00:21:46.470 lat (msec) : 2=0.18%, 4=0.24%, 10=0.70%, 20=80.56%, 50=14.77% 00:21:46.470 lat (msec) : 100=3.50%, 250=0.01% 00:21:46.470 cpu : usr=4.07%, sys=5.92%, ctx=6511, majf=0, minf=90 00:21:46.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:46.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:46.470 issued rwts: total=0,28564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:46.470 00:21:46.470 Run status group 0 (all jobs): 00:21:46.470 WRITE: bw=3422MiB/s (3589MB/s), 208MiB/s-713MiB/s (218MB/s-748MB/s), io=33.6GiB (36.1GB), run=10012-10064msec 00:21:46.470 00:21:46.470 Disk stats (read/write): 00:21:46.470 nvme0n1: ios=49/18966, merge=0/0, ticks=24/1217244, in_queue=1217268, util=96.58% 00:21:46.470 nvme10n1: ios=0/19340, merge=0/0, ticks=0/1210620, in_queue=1210620, util=96.72% 00:21:46.470 nvme1n1: ios=0/18343, merge=0/0, ticks=0/1216505, in_queue=1216505, util=97.06% 00:21:46.470 nvme2n1: ios=0/18094, merge=0/0, ticks=0/1212895, in_queue=1212895, util=97.24% 00:21:46.470 nvme3n1: 
ios=0/16462, merge=0/0, ticks=0/1208629, in_queue=1208629, util=97.37% 00:21:46.470 nvme4n1: ios=0/16417, merge=0/0, ticks=0/1208149, in_queue=1208149, util=97.76% 00:21:46.470 nvme5n1: ios=0/51695, merge=0/0, ticks=0/1224026, in_queue=1224026, util=97.93% 00:21:46.470 nvme6n1: ios=0/16498, merge=0/0, ticks=0/1210566, in_queue=1210566, util=98.08% 00:21:46.470 nvme7n1: ios=0/20308, merge=0/0, ticks=0/1208114, in_queue=1208114, util=98.57% 00:21:46.470 nvme8n1: ios=0/18176, merge=0/0, ticks=0/1215829, in_queue=1215829, util=98.80% 00:21:46.470 nvme9n1: ios=0/56017, merge=0/0, ticks=0/1217279, in_queue=1217279, util=98.96% 00:21:46.470 23:20:51 -- target/multiconnection.sh@36 -- # sync 00:21:46.470 23:20:51 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:46.470 23:20:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.470 23:20:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:47.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:47.036 23:20:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:47.036 23:20:52 -- common/autotest_common.sh@1198 -- # local i=0 00:21:47.036 23:20:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:47.036 23:20:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:21:47.036 23:20:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:47.036 23:20:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:21:47.036 23:20:52 -- common/autotest_common.sh@1210 -- # return 0 00:21:47.036 23:20:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.036 23:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.036 23:20:52 -- common/autotest_common.sh@10 -- # set +x 00:21:47.036 23:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.036 23:20:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.036 23:20:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:47.968 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:47.968 23:20:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:47.968 23:20:53 -- common/autotest_common.sh@1198 -- # local i=0 00:21:47.968 23:20:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:47.968 23:20:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:21:47.968 23:20:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:21:47.968 23:20:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:47.968 23:20:53 -- common/autotest_common.sh@1210 -- # return 0 00:21:47.968 23:20:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:47.969 23:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.969 23:20:53 -- common/autotest_common.sh@10 -- # set +x 00:21:47.969 23:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.969 23:20:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.969 23:20:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:48.901 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:48.901 23:20:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:48.901 23:20:54 -- common/autotest_common.sh@1198 -- # local i=0 00:21:48.901 23:20:54 -- common/autotest_common.sh@1199 -- # 
lsblk -o NAME,SERIAL 00:21:48.901 23:20:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:21:48.901 23:20:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:21:48.901 23:20:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:48.901 23:20:54 -- common/autotest_common.sh@1210 -- # return 0 00:21:48.901 23:20:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:48.901 23:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.901 23:20:54 -- common/autotest_common.sh@10 -- # set +x 00:21:48.901 23:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.901 23:20:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.901 23:20:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:49.833 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:50.091 23:20:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:50.091 23:20:55 -- common/autotest_common.sh@1198 -- # local i=0 00:21:50.091 23:20:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:50.091 23:20:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:21:50.091 23:20:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:50.091 23:20:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:21:50.091 23:20:55 -- common/autotest_common.sh@1210 -- # return 0 00:21:50.091 23:20:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:50.091 23:20:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.091 23:20:55 -- common/autotest_common.sh@10 -- # set +x 00:21:50.091 23:20:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.091 23:20:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.091 23:20:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:51.024 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:51.024 23:20:56 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:51.024 23:20:56 -- common/autotest_common.sh@1198 -- # local i=0 00:21:51.024 23:20:56 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:51.024 23:20:56 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:21:51.024 23:20:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:21:51.024 23:20:56 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:51.024 23:20:56 -- common/autotest_common.sh@1210 -- # return 0 00:21:51.024 23:20:56 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:51.024 23:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.024 23:20:56 -- common/autotest_common.sh@10 -- # set +x 00:21:51.024 23:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.024 23:20:56 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:51.024 23:20:56 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:51.957 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:51.957 23:20:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:51.957 23:20:57 -- common/autotest_common.sh@1198 -- # local i=0 00:21:51.957 23:20:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:51.957 23:20:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 
00:21:51.957 23:20:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:51.957 23:20:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:21:51.957 23:20:57 -- common/autotest_common.sh@1210 -- # return 0 00:21:51.957 23:20:57 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:51.957 23:20:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.957 23:20:57 -- common/autotest_common.sh@10 -- # set +x 00:21:51.957 23:20:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.957 23:20:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:51.957 23:20:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:52.887 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:52.887 23:20:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:52.887 23:20:58 -- common/autotest_common.sh@1198 -- # local i=0 00:21:52.887 23:20:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:52.887 23:20:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:21:52.887 23:20:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:52.887 23:20:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:21:52.887 23:20:58 -- common/autotest_common.sh@1210 -- # return 0 00:21:52.887 23:20:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:52.887 23:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.887 23:20:58 -- common/autotest_common.sh@10 -- # set +x 00:21:53.145 23:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.145 23:20:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.145 23:20:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:54.076 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:54.076 23:20:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:54.076 23:20:59 -- common/autotest_common.sh@1198 -- # local i=0 00:21:54.076 23:20:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:54.076 23:20:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:21:54.076 23:20:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:54.076 23:20:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:21:54.076 23:20:59 -- common/autotest_common.sh@1210 -- # return 0 00:21:54.076 23:20:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:54.076 23:20:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:54.076 23:20:59 -- common/autotest_common.sh@10 -- # set +x 00:21:54.076 23:20:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:54.076 23:20:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:54.076 23:20:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:55.012 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:55.012 23:21:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:55.012 23:21:00 -- common/autotest_common.sh@1198 -- # local i=0 00:21:55.012 23:21:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:55.012 23:21:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:21:55.012 23:21:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:55.012 
23:21:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:21:55.012 23:21:00 -- common/autotest_common.sh@1210 -- # return 0 00:21:55.012 23:21:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:55.012 23:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.012 23:21:00 -- common/autotest_common.sh@10 -- # set +x 00:21:55.012 23:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.012 23:21:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.012 23:21:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:55.947 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:55.947 23:21:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:55.947 23:21:01 -- common/autotest_common.sh@1198 -- # local i=0 00:21:55.947 23:21:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:55.947 23:21:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:21:55.947 23:21:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:55.947 23:21:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:21:55.947 23:21:01 -- common/autotest_common.sh@1210 -- # return 0 00:21:55.947 23:21:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:55.947 23:21:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.948 23:21:01 -- common/autotest_common.sh@10 -- # set +x 00:21:55.948 23:21:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.948 23:21:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.948 23:21:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:56.883 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:56.883 23:21:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:56.883 23:21:02 -- common/autotest_common.sh@1198 -- # local i=0 00:21:56.883 23:21:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:56.883 23:21:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:21:56.883 23:21:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:56.883 23:21:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:21:56.883 23:21:02 -- common/autotest_common.sh@1210 -- # return 0 00:21:56.883 23:21:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:56.883 23:21:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.883 23:21:02 -- common/autotest_common.sh@10 -- # set +x 00:21:57.142 23:21:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.142 23:21:02 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:57.142 23:21:02 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:57.142 23:21:02 -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:57.142 23:21:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:57.142 23:21:02 -- nvmf/common.sh@116 -- # sync 00:21:57.142 23:21:02 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:57.142 23:21:02 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:57.142 23:21:02 -- nvmf/common.sh@119 -- # set +e 00:21:57.142 23:21:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:57.142 23:21:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:57.142 rmmod nvme_rdma 00:21:57.142 rmmod nvme_fabrics 
00:21:57.142 23:21:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:57.142 23:21:02 -- nvmf/common.sh@123 -- # set -e 00:21:57.142 23:21:02 -- nvmf/common.sh@124 -- # return 0 00:21:57.142 23:21:02 -- nvmf/common.sh@477 -- # '[' -n 666016 ']' 00:21:57.142 23:21:02 -- nvmf/common.sh@478 -- # killprocess 666016 00:21:57.142 23:21:02 -- common/autotest_common.sh@926 -- # '[' -z 666016 ']' 00:21:57.142 23:21:02 -- common/autotest_common.sh@930 -- # kill -0 666016 00:21:57.142 23:21:02 -- common/autotest_common.sh@931 -- # uname 00:21:57.142 23:21:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:57.142 23:21:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 666016 00:21:57.142 23:21:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:57.142 23:21:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:57.142 23:21:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 666016' 00:21:57.142 killing process with pid 666016 00:21:57.142 23:21:02 -- common/autotest_common.sh@945 -- # kill 666016 00:21:57.142 23:21:02 -- common/autotest_common.sh@950 -- # wait 666016 00:21:57.711 23:21:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:57.712 23:21:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:57.712 00:21:57.712 real 1m15.533s 00:21:57.712 user 4m54.259s 00:21:57.712 sys 0m19.374s 00:21:57.712 23:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.712 23:21:03 -- common/autotest_common.sh@10 -- # set +x 00:21:57.712 ************************************ 00:21:57.712 END TEST nvmf_multiconnection 00:21:57.712 ************************************ 00:21:57.712 23:21:03 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:21:57.712 23:21:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:57.712 23:21:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:57.712 23:21:03 -- common/autotest_common.sh@10 -- # set +x 00:21:57.712 ************************************ 00:21:57.712 START TEST nvmf_initiator_timeout 00:21:57.712 ************************************ 00:21:57.712 23:21:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:21:57.712 * Looking for test storage... 
00:21:57.712 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:57.712 23:21:03 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.712 23:21:03 -- nvmf/common.sh@7 -- # uname -s 00:21:57.712 23:21:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.712 23:21:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.712 23:21:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.712 23:21:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.712 23:21:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.712 23:21:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.712 23:21:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.712 23:21:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.712 23:21:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.712 23:21:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.712 23:21:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:57.712 23:21:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:57.712 23:21:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.712 23:21:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.712 23:21:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.712 23:21:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:57.712 23:21:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.712 23:21:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.712 23:21:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.712 23:21:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.712 23:21:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.712 23:21:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.712 23:21:03 -- paths/export.sh@5 -- # export PATH 00:21:57.712 23:21:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.712 23:21:03 -- nvmf/common.sh@46 -- # : 0 00:21:57.712 23:21:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:57.712 23:21:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:57.712 23:21:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:57.712 23:21:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.712 23:21:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.712 23:21:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:57.712 23:21:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:57.712 23:21:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:57.712 23:21:03 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:57.971 23:21:03 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:57.971 23:21:03 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:57.971 23:21:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:57.971 23:21:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.971 23:21:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:57.971 23:21:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:57.971 23:21:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:57.971 23:21:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.971 23:21:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.971 23:21:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.971 23:21:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:57.971 23:21:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:57.971 23:21:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:57.971 23:21:03 -- common/autotest_common.sh@10 -- # set +x 00:22:04.616 23:21:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:04.616 23:21:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:04.616 23:21:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:04.616 23:21:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:04.616 23:21:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:04.616 23:21:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:04.616 23:21:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:04.616 23:21:10 -- nvmf/common.sh@294 -- # net_devs=() 00:22:04.616 23:21:10 -- nvmf/common.sh@294 -- # local -ga net_devs 
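Condensed, the environment that test/nvmf/common.sh and initiator_timeout.sh establish in the trace above amounts to the following (values are the ones actually logged for this run; the repeated PATH manipulation from paths/export.sh is omitted):

NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100                  # listeners end up on 192.168.100.8 / .9
NVMF_IP_LEAST_ADDR=8
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:8013ee90-... in this run
NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e   # uuid portion of the hostnqn
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NET_TYPE=phy                                # physical mlx5 NICs in this CI pool
MALLOC_BDEV_SIZE=64                         # MB, backing store for the Delay0 bdev created below
MALLOC_BLOCK_SIZE=512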
00:22:04.616 23:21:10 -- nvmf/common.sh@295 -- # e810=() 00:22:04.616 23:21:10 -- nvmf/common.sh@295 -- # local -ga e810 00:22:04.616 23:21:10 -- nvmf/common.sh@296 -- # x722=() 00:22:04.616 23:21:10 -- nvmf/common.sh@296 -- # local -ga x722 00:22:04.616 23:21:10 -- nvmf/common.sh@297 -- # mlx=() 00:22:04.616 23:21:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:04.616 23:21:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.616 23:21:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:04.616 23:21:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:22:04.616 23:21:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:04.616 23:21:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:04.616 23:21:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:04.616 23:21:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:04.616 23:21:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:04.616 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:04.616 23:21:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:04.616 23:21:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:04.616 23:21:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:04.616 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:04.616 23:21:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:04.616 23:21:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:04.616 23:21:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:04.616 23:21:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:04.616 23:21:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.616 23:21:10 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:04.616 23:21:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.616 23:21:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:04.616 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:04.616 23:21:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.616 23:21:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:04.616 23:21:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.616 23:21:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:04.616 23:21:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.616 23:21:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:04.616 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:04.616 23:21:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.616 23:21:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:04.616 23:21:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:04.616 23:21:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:04.617 23:21:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:04.617 23:21:10 -- nvmf/common.sh@57 -- # uname 00:22:04.617 23:21:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:04.617 23:21:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:22:04.617 23:21:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:04.617 23:21:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:04.617 23:21:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:04.617 23:21:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:04.617 23:21:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:04.617 23:21:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:04.617 23:21:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:04.617 23:21:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:04.617 23:21:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:04.617 23:21:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:04.617 23:21:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:04.617 23:21:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:04.617 23:21:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:04.617 23:21:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:04.617 23:21:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:04.617 23:21:10 -- nvmf/common.sh@104 -- # continue 2 00:22:04.617 23:21:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:04.617 23:21:10 -- nvmf/common.sh@104 -- # continue 2 00:22:04.617 23:21:10 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:04.617 23:21:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:04.617 23:21:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:04.617 23:21:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:04.617 23:21:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:04.617 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:04.617 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:04.617 altname enp217s0f0np0 00:22:04.617 altname ens818f0np0 00:22:04.617 inet 192.168.100.8/24 scope global mlx_0_0 00:22:04.617 valid_lft forever preferred_lft forever 00:22:04.617 23:21:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:04.617 23:21:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:04.617 23:21:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:04.617 23:21:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:04.617 23:21:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:04.617 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:04.617 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:04.617 altname enp217s0f1np1 00:22:04.617 altname ens818f1np1 00:22:04.617 inet 192.168.100.9/24 scope global mlx_0_1 00:22:04.617 valid_lft forever preferred_lft forever 00:22:04.617 23:21:10 -- nvmf/common.sh@410 -- # return 0 00:22:04.617 23:21:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:04.617 23:21:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:04.617 23:21:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:04.617 23:21:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:04.617 23:21:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:04.617 23:21:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:04.617 23:21:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:04.617 23:21:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:04.617 23:21:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:04.617 23:21:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:04.617 23:21:10 -- nvmf/common.sh@104 -- # continue 2 00:22:04.617 23:21:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.617 23:21:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:04.617 23:21:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 
00:22:04.617 23:21:10 -- nvmf/common.sh@104 -- # continue 2 00:22:04.617 23:21:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:04.617 23:21:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:04.617 23:21:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:04.617 23:21:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:04.617 23:21:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:04.617 23:21:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:04.617 23:21:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:04.617 23:21:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:04.617 192.168.100.9' 00:22:04.617 23:21:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:04.617 192.168.100.9' 00:22:04.617 23:21:10 -- nvmf/common.sh@445 -- # head -n 1 00:22:04.617 23:21:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:04.617 23:21:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:04.617 192.168.100.9' 00:22:04.617 23:21:10 -- nvmf/common.sh@446 -- # tail -n +2 00:22:04.617 23:21:10 -- nvmf/common.sh@446 -- # head -n 1 00:22:04.617 23:21:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:04.617 23:21:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:04.617 23:21:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:04.617 23:21:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:04.617 23:21:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:04.617 23:21:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:04.617 23:21:10 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:04.617 23:21:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:04.617 23:21:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:04.617 23:21:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.617 23:21:10 -- nvmf/common.sh@469 -- # nvmfpid=681637 00:22:04.617 23:21:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:04.617 23:21:10 -- nvmf/common.sh@470 -- # waitforlisten 681637 00:22:04.617 23:21:10 -- common/autotest_common.sh@819 -- # '[' -z 681637 ']' 00:22:04.617 23:21:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.617 23:21:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:04.617 23:21:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.617 23:21:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:04.617 23:21:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.617 [2024-11-02 23:21:10.321362] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
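Stripped of the harness plumbing, the address discovery traced above reduces to a few shell lines: each RDMA netdev's first IPv4 address is parsed out of 'ip -o -4 addr show', and the first two results become the target addresses. A condensed sketch, hard-coding the two interfaces found in this run:

get_ip_address() {                           # first IPv4 address of a netdev
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma                           # initiator-side transport driver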
00:22:04.617 [2024-11-02 23:21:10.321408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.617 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.876 [2024-11-02 23:21:10.390959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.876 [2024-11-02 23:21:10.465208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:04.876 [2024-11-02 23:21:10.465326] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.876 [2024-11-02 23:21:10.465335] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.876 [2024-11-02 23:21:10.465344] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.876 [2024-11-02 23:21:10.465437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.876 [2024-11-02 23:21:10.465533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.876 [2024-11-02 23:21:10.465616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.876 [2024-11-02 23:21:10.465618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.442 23:21:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:05.442 23:21:11 -- common/autotest_common.sh@852 -- # return 0 00:22:05.442 23:21:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:05.442 23:21:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:05.442 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.442 23:21:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.442 23:21:11 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:05.442 23:21:11 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:05.442 23:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.442 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.700 Malloc0 00:22:05.700 23:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.700 23:21:11 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:05.700 23:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.700 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.700 Delay0 00:22:05.700 23:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.700 23:21:11 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:05.700 23:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.700 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.700 [2024-11-02 23:21:11.246719] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1623e40/0x14921c0) succeed. 00:22:05.700 [2024-11-02 23:21:11.256852] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1625250/0x1512200) succeed. 
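With the target process up, the test provisions its storage over the RPC socket: a 64 MB malloc bdev, a delay bdev stacked on top of it with 30-microsecond average and p99 latencies in each direction, and the RDMA transport. rpc_cmd in the harness forwards these calls to the target; the same provisioning could be issued directly with scripts/rpc.py, which is what this sketch assumes:

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in usec
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that follow in the trace attach Delay0 to nqn.2016-06.io.spdk:cnode1 and open the RDMA listener on 192.168.100.8:4420, which the initiator then reaches with 'nvme connect -i 15'.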
00:22:05.700 23:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.700 23:21:11 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:05.700 23:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.701 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.701 23:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.701 23:21:11 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:05.701 23:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.701 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.701 23:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.701 23:21:11 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:05.701 23:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.701 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.701 [2024-11-02 23:21:11.397407] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:05.701 23:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.701 23:21:11 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:07.073 23:21:12 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:07.073 23:21:12 -- common/autotest_common.sh@1177 -- # local i=0 00:22:07.073 23:21:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.073 23:21:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:07.073 23:21:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:08.979 23:21:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:08.979 23:21:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:08.979 23:21:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:08.979 23:21:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:08.979 23:21:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.979 23:21:14 -- common/autotest_common.sh@1187 -- # return 0 00:22:08.979 23:21:14 -- target/initiator_timeout.sh@35 -- # fio_pid=682310 00:22:08.979 23:21:14 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:08.979 23:21:14 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:08.979 [global] 00:22:08.979 thread=1 00:22:08.979 invalidate=1 00:22:08.979 rw=write 00:22:08.979 time_based=1 00:22:08.979 runtime=60 00:22:08.979 ioengine=libaio 00:22:08.979 direct=1 00:22:08.979 bs=4096 00:22:08.979 iodepth=1 00:22:08.979 norandommap=0 00:22:08.979 numjobs=1 00:22:08.979 00:22:08.979 verify_dump=1 00:22:08.979 verify_backlog=512 00:22:08.979 verify_state_save=0 00:22:08.979 do_verify=1 00:22:08.979 verify=crc32c-intel 00:22:08.979 [job0] 00:22:08.979 filename=/dev/nvme0n1 00:22:08.979 Could not set queue depth (nvme0n1) 00:22:09.235 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:09.235 fio-3.35 00:22:09.235 Starting 1 thread 00:22:11.756 23:21:17 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:11.756 23:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.756 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:22:11.756 true 00:22:11.756 23:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.756 23:21:17 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:11.756 23:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.756 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:22:11.756 true 00:22:11.756 23:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.756 23:21:17 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:11.756 23:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.757 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:22:11.757 true 00:22:11.757 23:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.757 23:21:17 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:11.757 23:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.757 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:22:11.757 true 00:22:11.757 23:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.757 23:21:17 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:15.028 23:21:20 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:15.028 23:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.028 23:21:20 -- common/autotest_common.sh@10 -- # set +x 00:22:15.028 true 00:22:15.028 23:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.028 23:21:20 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:15.028 23:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.028 23:21:20 -- common/autotest_common.sh@10 -- # set +x 00:22:15.028 true 00:22:15.028 23:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.028 23:21:20 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:15.028 23:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.028 23:21:20 -- common/autotest_common.sh@10 -- # set +x 00:22:15.028 true 00:22:15.028 23:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.028 23:21:20 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:15.028 23:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.028 23:21:20 -- common/autotest_common.sh@10 -- # set +x 00:22:15.028 true 00:22:15.028 23:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.028 23:21:20 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:15.028 23:21:20 -- target/initiator_timeout.sh@54 -- # wait 682310 00:23:11.214 00:23:11.214 job0: (groupid=0, jobs=1): err= 0: pid=682456: Sat Nov 2 23:22:14 2024 00:23:11.214 read: IOPS=1434, BW=5737KiB/s (5875kB/s)(336MiB/60000msec) 00:23:11.214 slat (usec): min=3, max=15898, avg= 5.26, stdev=54.50 00:23:11.214 clat (usec): min=51, max=42324k, avg=589.06, stdev=144280.29 00:23:11.214 lat (usec): min=85, max=42324k, avg=594.31, stdev=144280.32 00:23:11.214 clat percentiles (usec): 00:23:11.214 | 1.00th=[ 86], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 92], 00:23:11.214 | 30.00th=[ 94], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 
98], 00:23:11.214 | 70.00th=[ 100], 80.00th=[ 102], 90.00th=[ 104], 95.00th=[ 108], 00:23:11.214 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 239], 99.95th=[ 265], 00:23:11.214 | 99.99th=[ 334] 00:23:11.214 write: IOPS=1442, BW=5769KiB/s (5907kB/s)(338MiB/60000msec); 0 zone resets 00:23:11.214 slat (usec): min=4, max=264, avg= 5.96, stdev= 2.94 00:23:11.214 clat (usec): min=70, max=414, avg=94.04, stdev= 9.92 00:23:11.214 lat (usec): min=82, max=425, avg=100.00, stdev=11.30 00:23:11.214 clat percentiles (usec): 00:23:11.214 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:23:11.214 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 95], 00:23:11.214 | 70.00th=[ 97], 80.00th=[ 99], 90.00th=[ 102], 95.00th=[ 105], 00:23:11.214 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 243], 99.95th=[ 262], 00:23:11.214 | 99.99th=[ 338] 00:23:11.214 bw ( KiB/s): min= 3912, max=20480, per=100.00%, avg=18716.00, stdev=3784.37, samples=36 00:23:11.214 iops : min= 978, max= 5120, avg=4679.00, stdev=946.09, samples=36 00:23:11.214 lat (usec) : 100=78.86%, 250=21.06%, 500=0.07% 00:23:11.214 lat (msec) : 10=0.01%, >=2000=0.01% 00:23:11.214 cpu : usr=0.81%, sys=1.58%, ctx=172584, majf=0, minf=131 00:23:11.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:11.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.214 issued rwts: total=86053,86528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:11.214 00:23:11.214 Run status group 0 (all jobs): 00:23:11.214 READ: bw=5737KiB/s (5875kB/s), 5737KiB/s-5737KiB/s (5875kB/s-5875kB/s), io=336MiB (352MB), run=60000-60000msec 00:23:11.214 WRITE: bw=5769KiB/s (5907kB/s), 5769KiB/s-5769KiB/s (5907kB/s-5907kB/s), io=338MiB (354MB), run=60000-60000msec 00:23:11.214 00:23:11.214 Disk stats (read/write): 00:23:11.214 nvme0n1: ios=85877/86016, merge=0/0, ticks=8164/7817, in_queue=15981, util=99.92% 00:23:11.214 23:22:14 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:11.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:11.214 23:22:15 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:11.214 23:22:15 -- common/autotest_common.sh@1198 -- # local i=0 00:23:11.214 23:22:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:11.214 23:22:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:11.214 23:22:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:11.214 23:22:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:11.214 23:22:15 -- common/autotest_common.sh@1210 -- # return 0 00:23:11.214 23:22:15 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:11.214 23:22:15 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:11.214 nvmf hotplug test: fio successful as expected 00:23:11.214 23:22:15 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.214 23:22:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.214 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 23:22:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.214 23:22:15 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:11.214 23:22:15 -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:11.214 23:22:15 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:11.214 23:22:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:11.214 23:22:15 -- nvmf/common.sh@116 -- # sync 00:23:11.214 23:22:15 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:11.214 23:22:15 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:11.214 23:22:15 -- nvmf/common.sh@119 -- # set +e 00:23:11.214 23:22:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:11.214 23:22:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:11.214 rmmod nvme_rdma 00:23:11.214 rmmod nvme_fabrics 00:23:11.214 23:22:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:11.214 23:22:16 -- nvmf/common.sh@123 -- # set -e 00:23:11.214 23:22:16 -- nvmf/common.sh@124 -- # return 0 00:23:11.214 23:22:16 -- nvmf/common.sh@477 -- # '[' -n 681637 ']' 00:23:11.214 23:22:16 -- nvmf/common.sh@478 -- # killprocess 681637 00:23:11.214 23:22:16 -- common/autotest_common.sh@926 -- # '[' -z 681637 ']' 00:23:11.214 23:22:16 -- common/autotest_common.sh@930 -- # kill -0 681637 00:23:11.214 23:22:16 -- common/autotest_common.sh@931 -- # uname 00:23:11.214 23:22:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:11.214 23:22:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 681637 00:23:11.214 23:22:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:11.214 23:22:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:11.214 23:22:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 681637' 00:23:11.214 killing process with pid 681637 00:23:11.214 23:22:16 -- common/autotest_common.sh@945 -- # kill 681637 00:23:11.214 23:22:16 -- common/autotest_common.sh@950 -- # wait 681637 00:23:11.214 23:22:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:11.214 23:22:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:11.214 00:23:11.214 real 1m13.056s 00:23:11.214 user 4m33.623s 00:23:11.214 sys 0m7.034s 00:23:11.214 23:22:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.215 23:22:16 -- common/autotest_common.sh@10 -- # set +x 00:23:11.215 ************************************ 00:23:11.215 END TEST nvmf_initiator_timeout 00:23:11.215 ************************************ 00:23:11.215 23:22:16 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:11.215 23:22:16 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:11.215 23:22:16 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:11.215 23:22:16 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:11.215 23:22:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:11.215 23:22:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:11.215 23:22:16 -- common/autotest_common.sh@10 -- # set +x 00:23:11.215 ************************************ 00:23:11.215 START TEST nvmf_shutdown 00:23:11.215 ************************************ 00:23:11.215 23:22:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:11.215 * Looking for test storage... 
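Two things are worth pulling out of the fio run above. First, the actual point of the test: while the job is running, bdev_delay_update_latency raises Delay0's latencies from 30 usec to roughly 31 seconds, sleeps, and then drops them back to 30 usec; the test then only requires that fio still exits cleanly ('fio successful as expected', fio_status=0). Second, the generated job file corresponds roughly to this standalone invocation (a sketch; the harness builds it through scripts/fio-wrapper with '-p nvmf -i 4096 -d 1 -t write -r 60 -v'):

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread --numjobs=1 \
    --rw=write --bs=4096 --iodepth=1 --time_based --runtime=60 \
    --do_verify=1 --verify=crc32c-intel --verify_backlog=512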
00:23:11.215 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:11.215 23:22:16 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.215 23:22:16 -- nvmf/common.sh@7 -- # uname -s 00:23:11.215 23:22:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.215 23:22:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.215 23:22:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.215 23:22:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.215 23:22:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.215 23:22:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.215 23:22:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.215 23:22:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.215 23:22:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.215 23:22:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.215 23:22:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:11.215 23:22:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:11.215 23:22:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.215 23:22:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.215 23:22:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.215 23:22:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:11.215 23:22:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.215 23:22:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.215 23:22:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.215 23:22:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.215 23:22:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.215 23:22:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.215 23:22:16 -- paths/export.sh@5 -- # export PATH 00:23:11.215 23:22:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.215 23:22:16 -- nvmf/common.sh@46 -- # : 0 00:23:11.215 23:22:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:11.215 23:22:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:11.215 23:22:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:11.215 23:22:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.215 23:22:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.215 23:22:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:11.215 23:22:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:11.215 23:22:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:11.215 23:22:16 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:11.215 23:22:16 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.215 23:22:16 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:11.215 23:22:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:11.215 23:22:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:11.215 23:22:16 -- common/autotest_common.sh@10 -- # set +x 00:23:11.215 ************************************ 00:23:11.215 START TEST nvmf_shutdown_tc1 00:23:11.215 ************************************ 00:23:11.215 23:22:16 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:23:11.215 23:22:16 -- target/shutdown.sh@74 -- # starttarget 00:23:11.215 23:22:16 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:11.215 23:22:16 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:11.215 23:22:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.215 23:22:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:11.215 23:22:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:11.215 23:22:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:11.215 23:22:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.215 23:22:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.215 23:22:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.215 23:22:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:11.215 23:22:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:11.215 23:22:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:11.215 23:22:16 -- common/autotest_common.sh@10 -- # set +x 00:23:17.772 23:22:23 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:17.772 23:22:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:17.772 23:22:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:17.772 23:22:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:17.772 23:22:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:17.772 23:22:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:17.772 23:22:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:17.772 23:22:23 -- nvmf/common.sh@294 -- # net_devs=() 00:23:17.772 23:22:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:17.772 23:22:23 -- nvmf/common.sh@295 -- # e810=() 00:23:17.772 23:22:23 -- nvmf/common.sh@295 -- # local -ga e810 00:23:17.772 23:22:23 -- nvmf/common.sh@296 -- # x722=() 00:23:17.772 23:22:23 -- nvmf/common.sh@296 -- # local -ga x722 00:23:17.772 23:22:23 -- nvmf/common.sh@297 -- # mlx=() 00:23:17.772 23:22:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:17.772 23:22:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.772 23:22:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:17.772 23:22:23 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:17.772 23:22:23 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:17.772 23:22:23 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:17.772 23:22:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:17.772 23:22:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:17.772 23:22:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:17.772 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:17.772 23:22:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:17.772 23:22:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:17.772 23:22:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:17.772 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:17.772 23:22:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:17.772 23:22:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:17.772 23:22:23 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:17.772 23:22:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.772 23:22:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:17.772 23:22:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.772 23:22:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:17.772 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:17.772 23:22:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.772 23:22:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:17.772 23:22:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.772 23:22:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:17.772 23:22:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.772 23:22:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:17.772 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:17.772 23:22:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.772 23:22:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:17.772 23:22:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:17.772 23:22:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:17.772 23:22:23 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:17.772 23:22:23 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:17.772 23:22:23 -- nvmf/common.sh@57 -- # uname 00:23:17.772 23:22:23 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:17.772 23:22:23 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:17.772 23:22:23 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:17.772 23:22:23 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:17.772 23:22:23 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:17.772 23:22:23 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:17.772 23:22:23 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:17.772 23:22:23 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:17.772 23:22:23 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:17.772 23:22:23 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:17.772 23:22:23 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:17.772 23:22:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:17.772 23:22:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:17.773 23:22:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:17.773 23:22:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:17.773 23:22:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:17.773 23:22:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:17.773 23:22:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:17.773 23:22:23 -- nvmf/common.sh@104 -- # continue 2 
00:23:17.773 23:22:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:17.773 23:22:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:17.773 23:22:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:17.773 23:22:23 -- nvmf/common.sh@104 -- # continue 2 00:23:17.773 23:22:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:17.773 23:22:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:17.773 23:22:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:17.773 23:22:23 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:17.773 23:22:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:17.773 23:22:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:17.773 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:17.773 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:17.773 altname enp217s0f0np0 00:23:17.773 altname ens818f0np0 00:23:17.773 inet 192.168.100.8/24 scope global mlx_0_0 00:23:17.773 valid_lft forever preferred_lft forever 00:23:17.773 23:22:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:17.773 23:22:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:17.773 23:22:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:17.773 23:22:23 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:17.773 23:22:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:17.773 23:22:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:17.773 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:17.773 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:17.773 altname enp217s0f1np1 00:23:17.773 altname ens818f1np1 00:23:17.773 inet 192.168.100.9/24 scope global mlx_0_1 00:23:17.773 valid_lft forever preferred_lft forever 00:23:17.773 23:22:23 -- nvmf/common.sh@410 -- # return 0 00:23:17.773 23:22:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:17.773 23:22:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:17.773 23:22:23 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:17.773 23:22:23 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:17.773 23:22:23 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:17.773 23:22:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:17.773 23:22:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:17.773 23:22:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:17.773 23:22:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:17.773 23:22:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:17.773 23:22:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:17.773 23:22:23 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:17.773 23:22:23 -- nvmf/common.sh@104 -- # continue 2 00:23:17.773 23:22:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:17.773 23:22:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:17.773 23:22:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:17.773 23:22:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:17.773 23:22:23 -- nvmf/common.sh@104 -- # continue 2 00:23:17.773 23:22:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:17.773 23:22:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:17.773 23:22:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:17.773 23:22:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:17.773 23:22:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:17.773 23:22:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:17.773 23:22:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:17.773 23:22:23 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:17.773 192.168.100.9' 00:23:17.773 23:22:23 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:17.773 192.168.100.9' 00:23:17.773 23:22:23 -- nvmf/common.sh@445 -- # head -n 1 00:23:17.773 23:22:23 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:17.773 23:22:23 -- nvmf/common.sh@446 -- # tail -n +2 00:23:17.773 23:22:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:17.773 192.168.100.9' 00:23:17.773 23:22:23 -- nvmf/common.sh@446 -- # head -n 1 00:23:17.773 23:22:23 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:17.773 23:22:23 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:17.773 23:22:23 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:17.773 23:22:23 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:17.773 23:22:23 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:17.773 23:22:23 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:17.773 23:22:23 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:17.773 23:22:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:17.773 23:22:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:17.773 23:22:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.773 23:22:23 -- nvmf/common.sh@469 -- # nvmfpid=696216 00:23:17.773 23:22:23 -- nvmf/common.sh@470 -- # waitforlisten 696216 00:23:17.773 23:22:23 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:17.773 23:22:23 -- common/autotest_common.sh@819 -- # '[' -z 696216 ']' 00:23:17.773 23:22:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.773 23:22:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:17.773 23:22:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
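waitforlisten polls the freshly started target until its RPC socket answers. A rough sketch of what the helper does (the real implementation in test/common/autotest_common.sh is more careful about timeouts and error reporting):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" || return 1           # target died during startup
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1                                 # never came up within the retry budget
}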
00:23:17.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.773 23:22:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:17.773 23:22:23 -- common/autotest_common.sh@10 -- # set +x 00:23:18.031 [2024-11-02 23:22:23.534768] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:18.031 [2024-11-02 23:22:23.534816] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.031 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.031 [2024-11-02 23:22:23.603854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.031 [2024-11-02 23:22:23.677717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:18.031 [2024-11-02 23:22:23.677824] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.031 [2024-11-02 23:22:23.677833] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.031 [2024-11-02 23:22:23.677842] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.031 [2024-11-02 23:22:23.677942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.031 [2024-11-02 23:22:23.678024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.031 [2024-11-02 23:22:23.678133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.031 [2024-11-02 23:22:23.678134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.961 23:22:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:18.961 23:22:24 -- common/autotest_common.sh@852 -- # return 0 00:23:18.961 23:22:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:18.961 23:22:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:18.961 23:22:24 -- common/autotest_common.sh@10 -- # set +x 00:23:18.961 23:22:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.962 23:22:24 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:18.962 23:22:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.962 23:22:24 -- common/autotest_common.sh@10 -- # set +x 00:23:18.962 [2024-11-02 23:22:24.438104] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15e1380/0x15e5870) succeed. 00:23:18.962 [2024-11-02 23:22:24.447219] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15e2970/0x1626f10) succeed. 
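The only notable difference from the earlier target launch is the reactor core mask: the initiator_timeout target ran with -m 0xF (cores 0-3), while this shutdown target uses -m 0x1E (cores 1-4), which matches the 'Reactor started on core' notices above. A quick way to decode such masks in the shell:

for mask in 0xF 0x1E; do
    printf '%#x -> cores:' "$mask"
    for core in {0..7}; do (( mask >> core & 1 )) && printf ' %d' "$core"; done
    echo
done
# prints: 0xf -> cores: 0 1 2 3
#         0x1e -> cores: 1 2 3 4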
00:23:18.962 23:22:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.962 23:22:24 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:18.962 23:22:24 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:18.962 23:22:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:18.962 23:22:24 -- common/autotest_common.sh@10 -- # set +x 00:23:18.962 23:22:24 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.962 23:22:24 -- target/shutdown.sh@28 -- # cat 00:23:18.962 23:22:24 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:18.962 23:22:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.962 23:22:24 -- common/autotest_common.sh@10 -- # set +x 00:23:18.962 Malloc1 00:23:18.962 [2024-11-02 23:22:24.672683] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:18.962 Malloc2 00:23:19.219 Malloc3 00:23:19.219 Malloc4 00:23:19.219 Malloc5 00:23:19.219 Malloc6 00:23:19.219 Malloc7 00:23:19.219 Malloc8 00:23:19.476 Malloc9 00:23:19.476 Malloc10 00:23:19.476 23:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.476 23:22:25 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:19.476 23:22:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:19.476 23:22:25 -- common/autotest_common.sh@10 -- # set +x 00:23:19.476 23:22:25 -- target/shutdown.sh@78 -- # perfpid=696529 00:23:19.476 23:22:25 -- target/shutdown.sh@79 -- # waitforlisten 696529 /var/tmp/bdevperf.sock 00:23:19.476 23:22:25 -- common/autotest_common.sh@819 -- # '[' -z 696529 ']' 00:23:19.476 23:22:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.476 23:22:25 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:19.476 23:22:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:19.476 23:22:25 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:19.476 23:22:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.476 23:22:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:19.476 23:22:25 -- nvmf/common.sh@520 -- # config=() 00:23:19.476 23:22:25 -- common/autotest_common.sh@10 -- # set +x 00:23:19.476 23:22:25 -- nvmf/common.sh@520 -- # local subsystem config 00:23:19.476 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.476 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.476 { 00:23:19.476 "params": { 00:23:19.476 "name": "Nvme$subsystem", 00:23:19.476 "trtype": "$TEST_TRANSPORT", 00:23:19.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.476 "adrfam": "ipv4", 00:23:19.476 "trsvcid": "$NVMF_PORT", 00:23:19.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.476 "hdgst": ${hdgst:-false}, 00:23:19.476 "ddgst": ${ddgst:-false} 00:23:19.476 }, 00:23:19.476 "method": "bdev_nvme_attach_controller" 00:23:19.476 } 00:23:19.476 EOF 00:23:19.476 )") 00:23:19.476 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.476 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.476 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.476 { 00:23:19.476 "params": { 00:23:19.476 "name": "Nvme$subsystem", 00:23:19.476 "trtype": "$TEST_TRANSPORT", 00:23:19.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.476 "adrfam": "ipv4", 00:23:19.476 "trsvcid": "$NVMF_PORT", 00:23:19.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.476 "hdgst": ${hdgst:-false}, 00:23:19.476 "ddgst": ${ddgst:-false} 00:23:19.476 }, 00:23:19.476 "method": "bdev_nvme_attach_controller" 00:23:19.476 } 00:23:19.476 EOF 00:23:19.476 )") 00:23:19.476 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.476 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.476 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.476 { 00:23:19.476 "params": { 00:23:19.476 "name": "Nvme$subsystem", 00:23:19.476 "trtype": "$TEST_TRANSPORT", 00:23:19.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.476 "adrfam": "ipv4", 00:23:19.476 "trsvcid": "$NVMF_PORT", 00:23:19.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.476 "hdgst": ${hdgst:-false}, 00:23:19.476 "ddgst": ${ddgst:-false} 00:23:19.476 }, 00:23:19.476 "method": "bdev_nvme_attach_controller" 00:23:19.476 } 00:23:19.476 EOF 00:23:19.476 )") 00:23:19.476 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.477 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.477 { 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme$subsystem", 00:23:19.477 "trtype": "$TEST_TRANSPORT", 00:23:19.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "$NVMF_PORT", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.477 "hdgst": ${hdgst:-false}, 00:23:19.477 "ddgst": ${ddgst:-false} 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 } 00:23:19.477 EOF 00:23:19.477 )") 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.477 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.477 
23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.477 { 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme$subsystem", 00:23:19.477 "trtype": "$TEST_TRANSPORT", 00:23:19.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "$NVMF_PORT", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.477 "hdgst": ${hdgst:-false}, 00:23:19.477 "ddgst": ${ddgst:-false} 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 } 00:23:19.477 EOF 00:23:19.477 )") 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.477 [2024-11-02 23:22:25.157890] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:19.477 [2024-11-02 23:22:25.157939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:19.477 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.477 { 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme$subsystem", 00:23:19.477 "trtype": "$TEST_TRANSPORT", 00:23:19.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "$NVMF_PORT", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.477 "hdgst": ${hdgst:-false}, 00:23:19.477 "ddgst": ${ddgst:-false} 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 } 00:23:19.477 EOF 00:23:19.477 )") 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.477 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.477 { 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme$subsystem", 00:23:19.477 "trtype": "$TEST_TRANSPORT", 00:23:19.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "$NVMF_PORT", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.477 "hdgst": ${hdgst:-false}, 00:23:19.477 "ddgst": ${ddgst:-false} 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 } 00:23:19.477 EOF 00:23:19.477 )") 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.477 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.477 { 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme$subsystem", 00:23:19.477 "trtype": "$TEST_TRANSPORT", 00:23:19.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "$NVMF_PORT", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.477 "hdgst": ${hdgst:-false}, 00:23:19.477 "ddgst": ${ddgst:-false} 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 } 00:23:19.477 EOF 00:23:19.477 )") 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.477 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.477 { 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme$subsystem", 
00:23:19.477 "trtype": "$TEST_TRANSPORT", 00:23:19.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "$NVMF_PORT", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.477 "hdgst": ${hdgst:-false}, 00:23:19.477 "ddgst": ${ddgst:-false} 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 } 00:23:19.477 EOF 00:23:19.477 )") 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.477 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.477 23:22:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:19.477 { 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme$subsystem", 00:23:19.477 "trtype": "$TEST_TRANSPORT", 00:23:19.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "$NVMF_PORT", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.477 "hdgst": ${hdgst:-false}, 00:23:19.477 "ddgst": ${ddgst:-false} 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 } 00:23:19.477 EOF 00:23:19.477 )") 00:23:19.477 23:22:25 -- nvmf/common.sh@542 -- # cat 00:23:19.477 23:22:25 -- nvmf/common.sh@544 -- # jq . 00:23:19.477 23:22:25 -- nvmf/common.sh@545 -- # IFS=, 00:23:19.477 23:22:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme1", 00:23:19.477 "trtype": "rdma", 00:23:19.477 "traddr": "192.168.100.8", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "4420", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.477 "hdgst": false, 00:23:19.477 "ddgst": false 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 },{ 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme2", 00:23:19.477 "trtype": "rdma", 00:23:19.477 "traddr": "192.168.100.8", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "4420", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.477 "hdgst": false, 00:23:19.477 "ddgst": false 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 },{ 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme3", 00:23:19.477 "trtype": "rdma", 00:23:19.477 "traddr": "192.168.100.8", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "4420", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:19.477 "hdgst": false, 00:23:19.477 "ddgst": false 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 },{ 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme4", 00:23:19.477 "trtype": "rdma", 00:23:19.477 "traddr": "192.168.100.8", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "4420", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:19.477 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:19.477 "hdgst": false, 00:23:19.477 "ddgst": false 00:23:19.477 }, 00:23:19.477 "method": "bdev_nvme_attach_controller" 00:23:19.477 },{ 00:23:19.477 "params": { 00:23:19.477 "name": "Nvme5", 00:23:19.477 "trtype": "rdma", 00:23:19.477 "traddr": "192.168.100.8", 00:23:19.477 "adrfam": "ipv4", 00:23:19.477 "trsvcid": "4420", 00:23:19.477 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:19.477 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:23:19.477 "hdgst": false, 00:23:19.477 "ddgst": false 00:23:19.478 }, 00:23:19.478 "method": "bdev_nvme_attach_controller" 00:23:19.478 },{ 00:23:19.478 "params": { 00:23:19.478 "name": "Nvme6", 00:23:19.478 "trtype": "rdma", 00:23:19.478 "traddr": "192.168.100.8", 00:23:19.478 "adrfam": "ipv4", 00:23:19.478 "trsvcid": "4420", 00:23:19.478 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:19.478 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:19.478 "hdgst": false, 00:23:19.478 "ddgst": false 00:23:19.478 }, 00:23:19.478 "method": "bdev_nvme_attach_controller" 00:23:19.478 },{ 00:23:19.478 "params": { 00:23:19.478 "name": "Nvme7", 00:23:19.478 "trtype": "rdma", 00:23:19.478 "traddr": "192.168.100.8", 00:23:19.478 "adrfam": "ipv4", 00:23:19.478 "trsvcid": "4420", 00:23:19.478 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:19.478 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:19.478 "hdgst": false, 00:23:19.478 "ddgst": false 00:23:19.478 }, 00:23:19.478 "method": "bdev_nvme_attach_controller" 00:23:19.478 },{ 00:23:19.478 "params": { 00:23:19.478 "name": "Nvme8", 00:23:19.478 "trtype": "rdma", 00:23:19.478 "traddr": "192.168.100.8", 00:23:19.478 "adrfam": "ipv4", 00:23:19.478 "trsvcid": "4420", 00:23:19.478 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:19.478 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:19.478 "hdgst": false, 00:23:19.478 "ddgst": false 00:23:19.478 }, 00:23:19.478 "method": "bdev_nvme_attach_controller" 00:23:19.478 },{ 00:23:19.478 "params": { 00:23:19.478 "name": "Nvme9", 00:23:19.478 "trtype": "rdma", 00:23:19.478 "traddr": "192.168.100.8", 00:23:19.478 "adrfam": "ipv4", 00:23:19.478 "trsvcid": "4420", 00:23:19.478 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:19.478 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:19.478 "hdgst": false, 00:23:19.478 "ddgst": false 00:23:19.478 }, 00:23:19.478 "method": "bdev_nvme_attach_controller" 00:23:19.478 },{ 00:23:19.478 "params": { 00:23:19.478 "name": "Nvme10", 00:23:19.478 "trtype": "rdma", 00:23:19.478 "traddr": "192.168.100.8", 00:23:19.478 "adrfam": "ipv4", 00:23:19.478 "trsvcid": "4420", 00:23:19.478 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:19.478 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:19.478 "hdgst": false, 00:23:19.478 "ddgst": false 00:23:19.478 }, 00:23:19.478 "method": "bdev_nvme_attach_controller" 00:23:19.478 }' 00:23:19.478 [2024-11-02 23:22:25.231579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.735 [2024-11-02 23:22:25.298680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.103 23:22:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:21.103 23:22:26 -- common/autotest_common.sh@852 -- # return 0 00:23:21.103 23:22:26 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.103 23:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:21.103 23:22:26 -- common/autotest_common.sh@10 -- # set +x 00:23:21.103 23:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:21.103 23:22:26 -- target/shutdown.sh@83 -- # kill -9 696529 00:23:21.103 23:22:26 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:21.103 23:22:26 -- target/shutdown.sh@87 -- # sleep 1 00:23:22.033 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 696529 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:22.033 23:22:27 -- target/shutdown.sh@88 -- # kill -0 696216 
00:23:22.033 23:22:27 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:22.033 23:22:27 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:22.033 23:22:27 -- nvmf/common.sh@520 -- # config=() 00:23:22.033 23:22:27 -- nvmf/common.sh@520 -- # local subsystem config 00:23:22.033 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.033 { 00:23:22.033 "params": { 00:23:22.033 "name": "Nvme$subsystem", 00:23:22.033 "trtype": "$TEST_TRANSPORT", 00:23:22.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.033 "adrfam": "ipv4", 00:23:22.033 "trsvcid": "$NVMF_PORT", 00:23:22.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.033 "hdgst": ${hdgst:-false}, 00:23:22.033 "ddgst": ${ddgst:-false} 00:23:22.033 }, 00:23:22.033 "method": "bdev_nvme_attach_controller" 00:23:22.033 } 00:23:22.033 EOF 00:23:22.033 )") 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.033 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.033 { 00:23:22.033 "params": { 00:23:22.033 "name": "Nvme$subsystem", 00:23:22.033 "trtype": "$TEST_TRANSPORT", 00:23:22.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.033 "adrfam": "ipv4", 00:23:22.033 "trsvcid": "$NVMF_PORT", 00:23:22.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.033 "hdgst": ${hdgst:-false}, 00:23:22.033 "ddgst": ${ddgst:-false} 00:23:22.033 }, 00:23:22.033 "method": "bdev_nvme_attach_controller" 00:23:22.033 } 00:23:22.033 EOF 00:23:22.033 )") 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.033 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.033 { 00:23:22.033 "params": { 00:23:22.033 "name": "Nvme$subsystem", 00:23:22.033 "trtype": "$TEST_TRANSPORT", 00:23:22.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.033 "adrfam": "ipv4", 00:23:22.033 "trsvcid": "$NVMF_PORT", 00:23:22.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.033 "hdgst": ${hdgst:-false}, 00:23:22.033 "ddgst": ${ddgst:-false} 00:23:22.033 }, 00:23:22.033 "method": "bdev_nvme_attach_controller" 00:23:22.033 } 00:23:22.033 EOF 00:23:22.033 )") 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.033 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.033 { 00:23:22.033 "params": { 00:23:22.033 "name": "Nvme$subsystem", 00:23:22.033 "trtype": "$TEST_TRANSPORT", 00:23:22.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.033 "adrfam": "ipv4", 00:23:22.033 "trsvcid": "$NVMF_PORT", 00:23:22.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.033 "hdgst": ${hdgst:-false}, 00:23:22.033 "ddgst": ${ddgst:-false} 00:23:22.033 }, 00:23:22.033 "method": "bdev_nvme_attach_controller" 00:23:22.033 } 00:23:22.033 EOF 00:23:22.033 )") 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.033 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.033 23:22:27 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.033 { 00:23:22.033 "params": { 00:23:22.033 "name": "Nvme$subsystem", 00:23:22.033 "trtype": "$TEST_TRANSPORT", 00:23:22.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.033 "adrfam": "ipv4", 00:23:22.033 "trsvcid": "$NVMF_PORT", 00:23:22.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.033 "hdgst": ${hdgst:-false}, 00:23:22.033 "ddgst": ${ddgst:-false} 00:23:22.033 }, 00:23:22.033 "method": "bdev_nvme_attach_controller" 00:23:22.033 } 00:23:22.033 EOF 00:23:22.033 )") 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.033 [2024-11-02 23:22:27.717308] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:22.033 [2024-11-02 23:22:27.717356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696942 ] 00:23:22.033 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.033 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.033 { 00:23:22.033 "params": { 00:23:22.033 "name": "Nvme$subsystem", 00:23:22.033 "trtype": "$TEST_TRANSPORT", 00:23:22.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.033 "adrfam": "ipv4", 00:23:22.033 "trsvcid": "$NVMF_PORT", 00:23:22.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.033 "hdgst": ${hdgst:-false}, 00:23:22.033 "ddgst": ${ddgst:-false} 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 } 00:23:22.034 EOF 00:23:22.034 )") 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.034 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.034 { 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme$subsystem", 00:23:22.034 "trtype": "$TEST_TRANSPORT", 00:23:22.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "$NVMF_PORT", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.034 "hdgst": ${hdgst:-false}, 00:23:22.034 "ddgst": ${ddgst:-false} 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 } 00:23:22.034 EOF 00:23:22.034 )") 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.034 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.034 { 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme$subsystem", 00:23:22.034 "trtype": "$TEST_TRANSPORT", 00:23:22.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "$NVMF_PORT", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.034 "hdgst": ${hdgst:-false}, 00:23:22.034 "ddgst": ${ddgst:-false} 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 } 00:23:22.034 EOF 00:23:22.034 )") 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.034 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.034 { 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme$subsystem", 
00:23:22.034 "trtype": "$TEST_TRANSPORT", 00:23:22.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "$NVMF_PORT", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.034 "hdgst": ${hdgst:-false}, 00:23:22.034 "ddgst": ${ddgst:-false} 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 } 00:23:22.034 EOF 00:23:22.034 )") 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.034 23:22:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:22.034 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:22.034 { 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme$subsystem", 00:23:22.034 "trtype": "$TEST_TRANSPORT", 00:23:22.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "$NVMF_PORT", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.034 "hdgst": ${hdgst:-false}, 00:23:22.034 "ddgst": ${ddgst:-false} 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 } 00:23:22.034 EOF 00:23:22.034 )") 00:23:22.034 23:22:27 -- nvmf/common.sh@542 -- # cat 00:23:22.034 23:22:27 -- nvmf/common.sh@544 -- # jq . 00:23:22.034 23:22:27 -- nvmf/common.sh@545 -- # IFS=, 00:23:22.034 23:22:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme1", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme2", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme3", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme4", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme5", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:22.034 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme6", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme7", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme8", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme9", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 },{ 00:23:22.034 "params": { 00:23:22.034 "name": "Nvme10", 00:23:22.034 "trtype": "rdma", 00:23:22.034 "traddr": "192.168.100.8", 00:23:22.034 "adrfam": "ipv4", 00:23:22.034 "trsvcid": "4420", 00:23:22.034 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:22.034 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:22.034 "hdgst": false, 00:23:22.034 "ddgst": false 00:23:22.034 }, 00:23:22.034 "method": "bdev_nvme_attach_controller" 00:23:22.034 }' 00:23:22.291 [2024-11-02 23:22:27.790052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.291 [2024-11-02 23:22:27.858425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.221 Running I/O for 1 seconds... 
00:23:24.220
00:23:24.220 Latency(us)
[2024-11-02T22:22:29.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-02T22:22:29.977Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme1n1 : 1.11 729.66 45.60 0.00 0.00 86651.34 7811.89 120795.96
[2024-11-02T22:22:29.977Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme2n1 : 1.11 742.49 46.41 0.00 0.00 84471.47 8074.04 76755.76
[2024-11-02T22:22:29.977Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme3n1 : 1.11 745.40 46.59 0.00 0.00 83629.91 8283.75 75078.04
[2024-11-02T22:22:29.977Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme4n1 : 1.11 746.51 46.66 0.00 0.00 83042.39 8493.47 72142.03
[2024-11-02T22:22:29.977Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme5n1 : 1.11 740.45 46.28 0.00 0.00 83253.22 8650.75 70044.88
[2024-11-02T22:22:29.977Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme6n1 : 1.12 739.78 46.24 0.00 0.00 82862.89 8860.47 70044.88
[2024-11-02T22:22:29.977Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme7n1 : 1.12 739.09 46.19 0.00 0.00 82461.64 9070.18 72142.03
[2024-11-02T22:22:29.977Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme8n1 : 1.12 738.42 46.15 0.00 0.00 82031.28 9279.90 74658.61
[2024-11-02T22:22:29.977Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme9n1 : 1.12 737.75 46.11 0.00 0.00 81610.07 9489.61 76755.76
[2024-11-02T22:22:29.977Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.220 Verification LBA range: start 0x0 length 0x400
00:23:24.220 Nvme10n1 : 1.12 545.04 34.07 0.00 0.00 109624.84 8074.04 335544.32
[2024-11-02T22:22:29.977Z] ===================================================================================================================
00:23:24.220
[2024-11-02T22:22:29.977Z] Total : 7204.60 450.29 0.00 0.00 85325.76 7811.89 335544.32
00:23:24.478 23:22:30 -- target/shutdown.sh@93 -- # stoptarget
00:23:24.478 23:22:30 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:24.478 23:22:30 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:24.478 23:22:30 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:24.478 23:22:30 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:24.478 23:22:30 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:24.478 23:22:30 -- nvmf/common.sh@116 -- # sync
00:23:24.478 23:22:30 --
nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:24.478 23:22:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:24.478 23:22:30 -- nvmf/common.sh@119 -- # set +e 00:23:24.478 23:22:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:24.478 23:22:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:24.478 rmmod nvme_rdma 00:23:24.478 rmmod nvme_fabrics 00:23:24.478 23:22:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:24.478 23:22:30 -- nvmf/common.sh@123 -- # set -e 00:23:24.478 23:22:30 -- nvmf/common.sh@124 -- # return 0 00:23:24.478 23:22:30 -- nvmf/common.sh@477 -- # '[' -n 696216 ']' 00:23:24.478 23:22:30 -- nvmf/common.sh@478 -- # killprocess 696216 00:23:24.735 23:22:30 -- common/autotest_common.sh@926 -- # '[' -z 696216 ']' 00:23:24.736 23:22:30 -- common/autotest_common.sh@930 -- # kill -0 696216 00:23:24.736 23:22:30 -- common/autotest_common.sh@931 -- # uname 00:23:24.736 23:22:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:24.736 23:22:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 696216 00:23:24.736 23:22:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:24.736 23:22:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:24.736 23:22:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 696216' 00:23:24.736 killing process with pid 696216 00:23:24.736 23:22:30 -- common/autotest_common.sh@945 -- # kill 696216 00:23:24.736 23:22:30 -- common/autotest_common.sh@950 -- # wait 696216 00:23:25.303 23:22:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:25.303 23:22:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:25.303 00:23:25.303 real 0m14.197s 00:23:25.303 user 0m33.601s 00:23:25.303 sys 0m6.492s 00:23:25.303 23:22:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.303 23:22:30 -- common/autotest_common.sh@10 -- # set +x 00:23:25.303 ************************************ 00:23:25.303 END TEST nvmf_shutdown_tc1 00:23:25.303 ************************************ 00:23:25.303 23:22:30 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:25.303 23:22:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:25.303 23:22:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:25.303 23:22:30 -- common/autotest_common.sh@10 -- # set +x 00:23:25.303 ************************************ 00:23:25.303 START TEST nvmf_shutdown_tc2 00:23:25.303 ************************************ 00:23:25.303 23:22:30 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:23:25.303 23:22:30 -- target/shutdown.sh@98 -- # starttarget 00:23:25.303 23:22:30 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:25.303 23:22:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:25.303 23:22:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.303 23:22:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:25.303 23:22:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:25.303 23:22:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:25.303 23:22:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.303 23:22:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.303 23:22:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.303 23:22:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:25.303 23:22:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:25.303 23:22:30 -- nvmf/common.sh@284 -- # xtrace_disable 
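The teardown traced above (nvmftestfini) mirrors the setup: the host-side NVMe/RDMA kernel modules are unloaded, then the target is stopped with a plain kill and reaped; the uname and ps checks inside killprocess only guard against signalling the wrong process. Reduced to its essentials, the sequence is roughly this sketch:

  # Unload the initiator-side modules pulled in for the test.
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics

  # Stop the nvmf target cleanly and wait for it to exit
  # (works because this shell is the one that started it).
  kill -0 "$nvmfpid" && kill "$nvmfpid"
  wait "$nvmfpid"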
00:23:25.303 23:22:30 -- common/autotest_common.sh@10 -- # set +x 00:23:25.303 23:22:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:25.303 23:22:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:25.303 23:22:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:25.303 23:22:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:25.303 23:22:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:25.303 23:22:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:25.303 23:22:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:25.303 23:22:30 -- nvmf/common.sh@294 -- # net_devs=() 00:23:25.303 23:22:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:25.303 23:22:30 -- nvmf/common.sh@295 -- # e810=() 00:23:25.303 23:22:30 -- nvmf/common.sh@295 -- # local -ga e810 00:23:25.303 23:22:30 -- nvmf/common.sh@296 -- # x722=() 00:23:25.303 23:22:30 -- nvmf/common.sh@296 -- # local -ga x722 00:23:25.303 23:22:30 -- nvmf/common.sh@297 -- # mlx=() 00:23:25.303 23:22:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:25.303 23:22:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.303 23:22:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.303 23:22:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.303 23:22:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.303 23:22:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.303 23:22:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.304 23:22:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.304 23:22:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.304 23:22:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.304 23:22:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.304 23:22:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.304 23:22:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:25.304 23:22:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:25.304 23:22:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:25.304 23:22:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:25.304 23:22:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:25.304 23:22:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:25.304 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:25.304 23:22:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:25.304 23:22:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:25.304 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:25.304 23:22:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:25.304 23:22:30 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:25.304 23:22:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:25.304 23:22:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.304 23:22:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:25.304 23:22:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.304 23:22:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:25.304 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:25.304 23:22:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.304 23:22:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.304 23:22:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:25.304 23:22:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.304 23:22:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:25.304 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:25.304 23:22:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.304 23:22:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:25.304 23:22:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:25.304 23:22:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:25.304 23:22:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:25.304 23:22:30 -- nvmf/common.sh@57 -- # uname 00:23:25.304 23:22:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:25.304 23:22:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:25.304 23:22:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:25.304 23:22:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:25.304 23:22:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:25.304 23:22:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:25.304 23:22:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:25.304 23:22:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:25.304 23:22:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:25.304 23:22:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:25.304 23:22:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:25.304 23:22:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:25.304 23:22:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:25.304 23:22:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:25.304 23:22:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:25.304 23:22:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:25.304 23:22:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:25.304 
23:22:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:25.304 23:22:30 -- nvmf/common.sh@104 -- # continue 2 00:23:25.304 23:22:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.304 23:22:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:25.304 23:22:30 -- nvmf/common.sh@104 -- # continue 2 00:23:25.304 23:22:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:25.304 23:22:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:25.304 23:22:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:25.304 23:22:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:25.304 23:22:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.304 23:22:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.304 23:22:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:25.304 23:22:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:25.304 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:25.304 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:25.304 altname enp217s0f0np0 00:23:25.304 altname ens818f0np0 00:23:25.304 inet 192.168.100.8/24 scope global mlx_0_0 00:23:25.304 valid_lft forever preferred_lft forever 00:23:25.304 23:22:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:25.304 23:22:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:25.304 23:22:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:25.304 23:22:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:25.304 23:22:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.304 23:22:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.304 23:22:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:25.304 23:22:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:25.304 23:22:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:25.304 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:25.304 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:25.304 altname enp217s0f1np1 00:23:25.304 altname ens818f1np1 00:23:25.304 inet 192.168.100.9/24 scope global mlx_0_1 00:23:25.304 valid_lft forever preferred_lft forever 00:23:25.304 23:22:30 -- nvmf/common.sh@410 -- # return 0 00:23:25.304 23:22:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:25.304 23:22:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:25.304 23:22:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:25.304 23:22:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:25.304 23:22:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:25.304 23:22:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:25.304 23:22:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:25.304 23:22:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:25.304 23:22:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:25.304 23:22:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:25.304 23:22:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.304 23:22:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:23:25.304 23:22:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:25.304 23:22:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:25.304 23:22:31 -- nvmf/common.sh@104 -- # continue 2 00:23:25.304 23:22:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.304 23:22:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.304 23:22:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:25.304 23:22:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.304 23:22:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:25.304 23:22:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:25.304 23:22:31 -- nvmf/common.sh@104 -- # continue 2 00:23:25.304 23:22:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:25.304 23:22:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:25.304 23:22:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:25.304 23:22:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.304 23:22:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:25.304 23:22:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.304 23:22:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:25.304 23:22:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:25.304 23:22:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:25.304 23:22:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.304 23:22:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.304 23:22:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:25.304 23:22:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:25.304 192.168.100.9' 00:23:25.304 23:22:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:25.304 192.168.100.9' 00:23:25.304 23:22:31 -- nvmf/common.sh@445 -- # head -n 1 00:23:25.304 23:22:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:25.304 23:22:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:25.304 192.168.100.9' 00:23:25.304 23:22:31 -- nvmf/common.sh@446 -- # tail -n +2 00:23:25.304 23:22:31 -- nvmf/common.sh@446 -- # head -n 1 00:23:25.562 23:22:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:25.562 23:22:31 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:25.562 23:22:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:25.562 23:22:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:25.562 23:22:31 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:25.562 23:22:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:25.562 23:22:31 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:25.562 23:22:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:25.562 23:22:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:25.562 23:22:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.562 23:22:31 -- nvmf/common.sh@469 -- # nvmfpid=697667 00:23:25.562 23:22:31 -- nvmf/common.sh@470 -- # waitforlisten 697667 00:23:25.562 23:22:31 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.562 23:22:31 -- common/autotest_common.sh@819 -- # '[' -z 697667 ']' 00:23:25.562 23:22:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.562 23:22:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:25.562 23:22:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.562 23:22:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:25.562 23:22:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.562 [2024-11-02 23:22:31.132523] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:25.562 [2024-11-02 23:22:31.132576] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.562 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.562 [2024-11-02 23:22:31.203933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.562 [2024-11-02 23:22:31.277798] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:25.562 [2024-11-02 23:22:31.277919] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.562 [2024-11-02 23:22:31.277930] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.562 [2024-11-02 23:22:31.277939] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.563 [2024-11-02 23:22:31.278036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.563 [2024-11-02 23:22:31.278120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.563 [2024-11-02 23:22:31.278230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.563 [2024-11-02 23:22:31.278231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:26.494 23:22:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:26.494 23:22:31 -- common/autotest_common.sh@852 -- # return 0 00:23:26.494 23:22:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:26.494 23:22:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:26.494 23:22:31 -- common/autotest_common.sh@10 -- # set +x 00:23:26.494 23:22:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.494 23:22:31 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:26.494 23:22:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.494 23:22:31 -- common/autotest_common.sh@10 -- # set +x 00:23:26.494 [2024-11-02 23:22:32.025232] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1afb380/0x1aff870) succeed. 00:23:26.494 [2024-11-02 23:22:32.034396] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1afc970/0x1b40f10) succeed. 
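Test case 2 restarts the target exactly the way test case 1 did: the core mask 0x1E pins the four reactors to cores 1 through 4 (bits 1..4 of the mask), -e 0xFFFF enables every tracepoint group, and waitforlisten blocks until the RPC socket answers. A bare-bones sketch of that start-up, using the framework_wait_init RPC that this log also relies on for the bdevperf socket:

  # Start the target on cores 1-4 with all tracepoint groups enabled.
  $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

  # Block until initialization is done and /var/tmp/spdk.sock is being served.
  $rootdir/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init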
00:23:26.494 23:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.494 23:22:32 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:26.494 23:22:32 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:26.494 23:22:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:26.494 23:22:32 -- common/autotest_common.sh@10 -- # set +x 00:23:26.494 23:22:32 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.494 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.494 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.494 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.494 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.494 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.494 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.494 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.494 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.494 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.494 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.494 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.494 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.494 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.494 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.494 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.494 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.495 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.495 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.495 23:22:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.495 23:22:32 -- target/shutdown.sh@28 -- # cat 00:23:26.495 23:22:32 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:26.495 23:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.495 23:22:32 -- common/autotest_common.sh@10 -- # set +x 00:23:26.495 Malloc1 00:23:26.752 [2024-11-02 23:22:32.255828] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:26.752 Malloc2 00:23:26.752 Malloc3 00:23:26.752 Malloc4 00:23:26.752 Malloc5 00:23:26.752 Malloc6 00:23:26.752 Malloc7 00:23:27.010 Malloc8 00:23:27.010 Malloc9 00:23:27.010 Malloc10 00:23:27.010 23:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:27.010 23:22:32 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:27.010 23:22:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:27.010 23:22:32 -- common/autotest_common.sh@10 -- # set +x 00:23:27.010 23:22:32 -- target/shutdown.sh@102 -- # perfpid=697996 00:23:27.010 23:22:32 -- target/shutdown.sh@103 -- # waitforlisten 697996 /var/tmp/bdevperf.sock 00:23:27.010 23:22:32 -- common/autotest_common.sh@819 -- # '[' -z 697996 ']' 00:23:27.010 23:22:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.010 23:22:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:27.010 23:22:32 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:27.010 23:22:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
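The ten '# cat' traces earlier come from the loop that appends one block of RPC commands per subsystem to rpcs.txt before replaying the whole file against the target; the Malloc1 through Malloc10 lines and the single 'NVMe/RDMA Target Listening on 192.168.100.8 port 4420' notice are the visible result. Per subsystem the batch amounts to something like the sketch below; the rpc.py command names are the current ones, and the bdev size and serial number are illustrative rather than taken from this log:

  # Back subsystem $i with a malloc bdev and expose it over RDMA on port 4420.
  i=1
  $rootdir/scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512
  $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t rdma -a 192.168.100.8 -s 4420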
00:23:27.010 23:22:32 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:27.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.010 23:22:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:27.010 23:22:32 -- nvmf/common.sh@520 -- # config=() 00:23:27.010 23:22:32 -- common/autotest_common.sh@10 -- # set +x 00:23:27.010 23:22:32 -- nvmf/common.sh@520 -- # local subsystem config 00:23:27.010 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.010 { 00:23:27.010 "params": { 00:23:27.010 "name": "Nvme$subsystem", 00:23:27.010 "trtype": "$TEST_TRANSPORT", 00:23:27.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.010 "adrfam": "ipv4", 00:23:27.010 "trsvcid": "$NVMF_PORT", 00:23:27.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.010 "hdgst": ${hdgst:-false}, 00:23:27.010 "ddgst": ${ddgst:-false} 00:23:27.010 }, 00:23:27.010 "method": "bdev_nvme_attach_controller" 00:23:27.010 } 00:23:27.010 EOF 00:23:27.010 )") 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.010 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.010 { 00:23:27.010 "params": { 00:23:27.010 "name": "Nvme$subsystem", 00:23:27.010 "trtype": "$TEST_TRANSPORT", 00:23:27.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.010 "adrfam": "ipv4", 00:23:27.010 "trsvcid": "$NVMF_PORT", 00:23:27.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.010 "hdgst": ${hdgst:-false}, 00:23:27.010 "ddgst": ${ddgst:-false} 00:23:27.010 }, 00:23:27.010 "method": "bdev_nvme_attach_controller" 00:23:27.010 } 00:23:27.010 EOF 00:23:27.010 )") 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.010 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.010 { 00:23:27.010 "params": { 00:23:27.010 "name": "Nvme$subsystem", 00:23:27.010 "trtype": "$TEST_TRANSPORT", 00:23:27.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.010 "adrfam": "ipv4", 00:23:27.010 "trsvcid": "$NVMF_PORT", 00:23:27.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.010 "hdgst": ${hdgst:-false}, 00:23:27.010 "ddgst": ${ddgst:-false} 00:23:27.010 }, 00:23:27.010 "method": "bdev_nvme_attach_controller" 00:23:27.010 } 00:23:27.010 EOF 00:23:27.010 )") 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.010 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.010 { 00:23:27.010 "params": { 00:23:27.010 "name": "Nvme$subsystem", 00:23:27.010 "trtype": "$TEST_TRANSPORT", 00:23:27.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.010 "adrfam": "ipv4", 00:23:27.010 "trsvcid": "$NVMF_PORT", 00:23:27.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.010 "hdgst": ${hdgst:-false}, 00:23:27.010 "ddgst": ${ddgst:-false} 00:23:27.010 }, 00:23:27.010 "method": "bdev_nvme_attach_controller" 00:23:27.010 } 00:23:27.010 EOF 00:23:27.010 )") 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.010 23:22:32 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.010 { 00:23:27.010 "params": { 00:23:27.010 "name": "Nvme$subsystem", 00:23:27.010 "trtype": "$TEST_TRANSPORT", 00:23:27.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.010 "adrfam": "ipv4", 00:23:27.010 "trsvcid": "$NVMF_PORT", 00:23:27.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.010 "hdgst": ${hdgst:-false}, 00:23:27.010 "ddgst": ${ddgst:-false} 00:23:27.010 }, 00:23:27.010 "method": "bdev_nvme_attach_controller" 00:23:27.010 } 00:23:27.010 EOF 00:23:27.010 )") 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.010 [2024-11-02 23:22:32.741438] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:27.010 [2024-11-02 23:22:32.741492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697996 ] 00:23:27.010 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.010 { 00:23:27.010 "params": { 00:23:27.010 "name": "Nvme$subsystem", 00:23:27.010 "trtype": "$TEST_TRANSPORT", 00:23:27.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.010 "adrfam": "ipv4", 00:23:27.010 "trsvcid": "$NVMF_PORT", 00:23:27.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.010 "hdgst": ${hdgst:-false}, 00:23:27.010 "ddgst": ${ddgst:-false} 00:23:27.010 }, 00:23:27.010 "method": "bdev_nvme_attach_controller" 00:23:27.010 } 00:23:27.010 EOF 00:23:27.010 )") 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.010 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.010 { 00:23:27.010 "params": { 00:23:27.010 "name": "Nvme$subsystem", 00:23:27.010 "trtype": "$TEST_TRANSPORT", 00:23:27.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.010 "adrfam": "ipv4", 00:23:27.010 "trsvcid": "$NVMF_PORT", 00:23:27.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.010 "hdgst": ${hdgst:-false}, 00:23:27.010 "ddgst": ${ddgst:-false} 00:23:27.010 }, 00:23:27.010 "method": "bdev_nvme_attach_controller" 00:23:27.010 } 00:23:27.010 EOF 00:23:27.010 )") 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.010 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.010 { 00:23:27.010 "params": { 00:23:27.010 "name": "Nvme$subsystem", 00:23:27.010 "trtype": "$TEST_TRANSPORT", 00:23:27.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.010 "adrfam": "ipv4", 00:23:27.010 "trsvcid": "$NVMF_PORT", 00:23:27.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.010 "hdgst": ${hdgst:-false}, 00:23:27.010 "ddgst": ${ddgst:-false} 00:23:27.010 }, 00:23:27.010 "method": "bdev_nvme_attach_controller" 00:23:27.010 } 00:23:27.010 EOF 00:23:27.010 )") 00:23:27.010 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.268 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.268 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.268 { 00:23:27.268 
"params": { 00:23:27.268 "name": "Nvme$subsystem", 00:23:27.268 "trtype": "$TEST_TRANSPORT", 00:23:27.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.268 "adrfam": "ipv4", 00:23:27.268 "trsvcid": "$NVMF_PORT", 00:23:27.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.268 "hdgst": ${hdgst:-false}, 00:23:27.268 "ddgst": ${ddgst:-false} 00:23:27.268 }, 00:23:27.268 "method": "bdev_nvme_attach_controller" 00:23:27.268 } 00:23:27.268 EOF 00:23:27.268 )") 00:23:27.268 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.268 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.268 23:22:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.268 23:22:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.268 { 00:23:27.268 "params": { 00:23:27.268 "name": "Nvme$subsystem", 00:23:27.268 "trtype": "$TEST_TRANSPORT", 00:23:27.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.268 "adrfam": "ipv4", 00:23:27.268 "trsvcid": "$NVMF_PORT", 00:23:27.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.268 "hdgst": ${hdgst:-false}, 00:23:27.268 "ddgst": ${ddgst:-false} 00:23:27.268 }, 00:23:27.268 "method": "bdev_nvme_attach_controller" 00:23:27.268 } 00:23:27.268 EOF 00:23:27.268 )") 00:23:27.268 23:22:32 -- nvmf/common.sh@542 -- # cat 00:23:27.268 23:22:32 -- nvmf/common.sh@544 -- # jq . 00:23:27.268 23:22:32 -- nvmf/common.sh@545 -- # IFS=, 00:23:27.268 23:22:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:27.268 "params": { 00:23:27.268 "name": "Nvme1", 00:23:27.268 "trtype": "rdma", 00:23:27.268 "traddr": "192.168.100.8", 00:23:27.268 "adrfam": "ipv4", 00:23:27.268 "trsvcid": "4420", 00:23:27.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.268 "hdgst": false, 00:23:27.268 "ddgst": false 00:23:27.268 }, 00:23:27.268 "method": "bdev_nvme_attach_controller" 00:23:27.268 },{ 00:23:27.268 "params": { 00:23:27.268 "name": "Nvme2", 00:23:27.268 "trtype": "rdma", 00:23:27.268 "traddr": "192.168.100.8", 00:23:27.268 "adrfam": "ipv4", 00:23:27.268 "trsvcid": "4420", 00:23:27.268 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:27.268 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:27.268 "hdgst": false, 00:23:27.268 "ddgst": false 00:23:27.268 }, 00:23:27.268 "method": "bdev_nvme_attach_controller" 00:23:27.268 },{ 00:23:27.268 "params": { 00:23:27.268 "name": "Nvme3", 00:23:27.268 "trtype": "rdma", 00:23:27.268 "traddr": "192.168.100.8", 00:23:27.268 "adrfam": "ipv4", 00:23:27.268 "trsvcid": "4420", 00:23:27.268 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:27.268 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:27.268 "hdgst": false, 00:23:27.268 "ddgst": false 00:23:27.268 }, 00:23:27.268 "method": "bdev_nvme_attach_controller" 00:23:27.268 },{ 00:23:27.268 "params": { 00:23:27.268 "name": "Nvme4", 00:23:27.268 "trtype": "rdma", 00:23:27.268 "traddr": "192.168.100.8", 00:23:27.268 "adrfam": "ipv4", 00:23:27.268 "trsvcid": "4420", 00:23:27.268 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:27.268 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:27.268 "hdgst": false, 00:23:27.268 "ddgst": false 00:23:27.268 }, 00:23:27.268 "method": "bdev_nvme_attach_controller" 00:23:27.269 },{ 00:23:27.269 "params": { 00:23:27.269 "name": "Nvme5", 00:23:27.269 "trtype": "rdma", 00:23:27.269 "traddr": "192.168.100.8", 00:23:27.269 "adrfam": "ipv4", 00:23:27.269 "trsvcid": "4420", 00:23:27.269 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:27.269 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:27.269 "hdgst": false, 00:23:27.269 "ddgst": false 00:23:27.269 }, 00:23:27.269 "method": "bdev_nvme_attach_controller" 00:23:27.269 },{ 00:23:27.269 "params": { 00:23:27.269 "name": "Nvme6", 00:23:27.269 "trtype": "rdma", 00:23:27.269 "traddr": "192.168.100.8", 00:23:27.269 "adrfam": "ipv4", 00:23:27.269 "trsvcid": "4420", 00:23:27.269 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:27.269 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:27.269 "hdgst": false, 00:23:27.269 "ddgst": false 00:23:27.269 }, 00:23:27.269 "method": "bdev_nvme_attach_controller" 00:23:27.269 },{ 00:23:27.269 "params": { 00:23:27.269 "name": "Nvme7", 00:23:27.269 "trtype": "rdma", 00:23:27.269 "traddr": "192.168.100.8", 00:23:27.269 "adrfam": "ipv4", 00:23:27.269 "trsvcid": "4420", 00:23:27.269 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:27.269 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:27.269 "hdgst": false, 00:23:27.269 "ddgst": false 00:23:27.269 }, 00:23:27.269 "method": "bdev_nvme_attach_controller" 00:23:27.269 },{ 00:23:27.269 "params": { 00:23:27.269 "name": "Nvme8", 00:23:27.269 "trtype": "rdma", 00:23:27.269 "traddr": "192.168.100.8", 00:23:27.269 "adrfam": "ipv4", 00:23:27.269 "trsvcid": "4420", 00:23:27.269 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:27.269 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:27.269 "hdgst": false, 00:23:27.269 "ddgst": false 00:23:27.269 }, 00:23:27.269 "method": "bdev_nvme_attach_controller" 00:23:27.269 },{ 00:23:27.269 "params": { 00:23:27.269 "name": "Nvme9", 00:23:27.269 "trtype": "rdma", 00:23:27.269 "traddr": "192.168.100.8", 00:23:27.269 "adrfam": "ipv4", 00:23:27.269 "trsvcid": "4420", 00:23:27.269 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:27.269 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:27.269 "hdgst": false, 00:23:27.269 "ddgst": false 00:23:27.269 }, 00:23:27.269 "method": "bdev_nvme_attach_controller" 00:23:27.269 },{ 00:23:27.269 "params": { 00:23:27.269 "name": "Nvme10", 00:23:27.269 "trtype": "rdma", 00:23:27.269 "traddr": "192.168.100.8", 00:23:27.269 "adrfam": "ipv4", 00:23:27.269 "trsvcid": "4420", 00:23:27.269 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:27.269 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:27.269 "hdgst": false, 00:23:27.269 "ddgst": false 00:23:27.269 }, 00:23:27.269 "method": "bdev_nvme_attach_controller" 00:23:27.269 }' 00:23:27.269 [2024-11-02 23:22:32.813741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.269 [2024-11-02 23:22:32.881659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.201 Running I/O for 10 seconds... 
00:23:28.766 23:22:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:28.766 23:22:34 -- common/autotest_common.sh@852 -- # return 0 00:23:28.766 23:22:34 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:28.766 23:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.766 23:22:34 -- common/autotest_common.sh@10 -- # set +x 00:23:28.766 23:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.766 23:22:34 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:28.766 23:22:34 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:28.766 23:22:34 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:28.766 23:22:34 -- target/shutdown.sh@57 -- # local ret=1 00:23:28.766 23:22:34 -- target/shutdown.sh@58 -- # local i 00:23:28.766 23:22:34 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:28.766 23:22:34 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:28.766 23:22:34 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:28.766 23:22:34 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:28.766 23:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.766 23:22:34 -- common/autotest_common.sh@10 -- # set +x 00:23:29.023 23:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.023 23:22:34 -- target/shutdown.sh@60 -- # read_io_count=461 00:23:29.023 23:22:34 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:23:29.023 23:22:34 -- target/shutdown.sh@64 -- # ret=0 00:23:29.023 23:22:34 -- target/shutdown.sh@65 -- # break 00:23:29.023 23:22:34 -- target/shutdown.sh@69 -- # return 0 00:23:29.023 23:22:34 -- target/shutdown.sh@109 -- # killprocess 697996 00:23:29.023 23:22:34 -- common/autotest_common.sh@926 -- # '[' -z 697996 ']' 00:23:29.023 23:22:34 -- common/autotest_common.sh@930 -- # kill -0 697996 00:23:29.023 23:22:34 -- common/autotest_common.sh@931 -- # uname 00:23:29.023 23:22:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:29.023 23:22:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 697996 00:23:29.023 23:22:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:29.023 23:22:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:29.023 23:22:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 697996' 00:23:29.023 killing process with pid 697996 00:23:29.023 23:22:34 -- common/autotest_common.sh@945 -- # kill 697996 00:23:29.023 23:22:34 -- common/autotest_common.sh@950 -- # wait 697996 00:23:29.023 Received shutdown signal, test time was about 0.935568 seconds 00:23:29.023 00:23:29.023 Latency(us) 00:23:29.023 [2024-11-02T22:22:34.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme1n1 : 0.93 708.33 44.27 0.00 0.00 89349.14 7549.75 108213.04 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme2n1 : 0.93 707.54 44.22 0.00 0.00 88693.55 7864.32 104857.60 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme3n1 : 0.93 
733.66 45.85 0.00 0.00 84936.54 8126.46 100663.30 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme4n1 : 0.93 744.74 46.55 0.00 0.00 82987.29 8283.75 74239.18 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme5n1 : 0.93 743.98 46.50 0.00 0.00 82496.87 8388.61 72980.89 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme6n1 : 0.93 743.22 46.45 0.00 0.00 81986.75 8493.47 71722.60 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme7n1 : 0.93 742.46 46.40 0.00 0.00 81470.42 8650.75 70044.88 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme8n1 : 0.93 741.70 46.36 0.00 0.00 80954.54 8808.04 70883.74 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme9n1 : 0.93 653.18 40.82 0.00 0.00 91235.82 8912.90 152672.67 00:23:29.023 [2024-11-02T22:22:34.780Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.023 Verification LBA range: start 0x0 length 0x400 00:23:29.023 Nvme10n1 : 0.93 652.55 40.78 0.00 0.00 90547.12 7811.89 150156.08 00:23:29.023 [2024-11-02T22:22:34.781Z] =================================================================================================================== 00:23:29.024 [2024-11-02T22:22:34.781Z] Total : 7171.37 448.21 0.00 0.00 85294.97 7549.75 152672.67 00:23:29.281 23:22:34 -- target/shutdown.sh@112 -- # sleep 1 00:23:30.650 23:22:35 -- target/shutdown.sh@113 -- # kill -0 697667 00:23:30.650 23:22:35 -- target/shutdown.sh@115 -- # stoptarget 00:23:30.650 23:22:35 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:30.650 23:22:35 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:30.650 23:22:36 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:30.650 23:22:36 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:30.650 23:22:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:30.650 23:22:36 -- nvmf/common.sh@116 -- # sync 00:23:30.650 23:22:36 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:30.650 23:22:36 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:30.650 23:22:36 -- nvmf/common.sh@119 -- # set +e 00:23:30.650 23:22:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:30.650 23:22:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:30.650 rmmod nvme_rdma 00:23:30.650 rmmod nvme_fabrics 00:23:30.650 23:22:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:30.650 23:22:36 -- nvmf/common.sh@123 -- # set -e 00:23:30.650 23:22:36 -- nvmf/common.sh@124 -- # return 0 00:23:30.650 23:22:36 -- nvmf/common.sh@477 -- # '[' -n 697667 ']' 00:23:30.650 23:22:36 -- nvmf/common.sh@478 -- # killprocess 697667 00:23:30.650 23:22:36 -- 
common/autotest_common.sh@926 -- # '[' -z 697667 ']' 00:23:30.650 23:22:36 -- common/autotest_common.sh@930 -- # kill -0 697667 00:23:30.650 23:22:36 -- common/autotest_common.sh@931 -- # uname 00:23:30.650 23:22:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:30.650 23:22:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 697667 00:23:30.650 23:22:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:30.650 23:22:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:30.650 23:22:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 697667' 00:23:30.650 killing process with pid 697667 00:23:30.650 23:22:36 -- common/autotest_common.sh@945 -- # kill 697667 00:23:30.650 23:22:36 -- common/autotest_common.sh@950 -- # wait 697667 00:23:30.909 23:22:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:30.909 23:22:36 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:30.909 00:23:30.909 real 0m5.762s 00:23:30.909 user 0m23.328s 00:23:30.909 sys 0m1.221s 00:23:30.909 23:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.909 23:22:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 ************************************ 00:23:30.909 END TEST nvmf_shutdown_tc2 00:23:30.909 ************************************ 00:23:30.909 23:22:36 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:30.909 23:22:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:30.909 23:22:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:30.909 23:22:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 ************************************ 00:23:30.909 START TEST nvmf_shutdown_tc3 00:23:30.909 ************************************ 00:23:30.909 23:22:36 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:23:30.909 23:22:36 -- target/shutdown.sh@120 -- # starttarget 00:23:30.909 23:22:36 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:30.909 23:22:36 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:30.909 23:22:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.909 23:22:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:30.909 23:22:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:30.909 23:22:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:30.909 23:22:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.909 23:22:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.909 23:22:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.909 23:22:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:30.909 23:22:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:30.909 23:22:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:30.909 23:22:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.909 23:22:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:30.909 23:22:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:30.909 23:22:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:30.909 23:22:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:30.909 23:22:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:30.909 23:22:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:31.168 23:22:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:31.168 23:22:36 -- nvmf/common.sh@294 -- # net_devs=() 00:23:31.168 23:22:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:31.168 23:22:36 -- nvmf/common.sh@295 -- # 
e810=() 00:23:31.168 23:22:36 -- nvmf/common.sh@295 -- # local -ga e810 00:23:31.169 23:22:36 -- nvmf/common.sh@296 -- # x722=() 00:23:31.169 23:22:36 -- nvmf/common.sh@296 -- # local -ga x722 00:23:31.169 23:22:36 -- nvmf/common.sh@297 -- # mlx=() 00:23:31.169 23:22:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:31.169 23:22:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.169 23:22:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:31.169 23:22:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:31.169 23:22:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:31.169 23:22:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:31.169 23:22:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:31.169 23:22:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:31.169 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:31.169 23:22:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:31.169 23:22:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:31.169 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:31.169 23:22:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:31.169 23:22:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:31.169 23:22:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.169 23:22:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
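The probe above filters the PCI device table down to the Mellanox mlx5 IDs, and both ports of the adapter report vendor 0x15b3, device 0x1015. A hypothetical standalone check using those IDs and the sysfs path walked by the script:

# list the PCI functions matching the ID reported above and map them to netdevs
lspci -D -d 15b3:1015
for pci in 0000:d9:00.0 0000:d9:00.1; do
  ls "/sys/bus/pci/devices/$pci/net/"       # -> mlx_0_0 and mlx_0_1 in this run
done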
00:23:31.169 23:22:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.169 23:22:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:31.169 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.169 23:22:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.169 23:22:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:31.169 23:22:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.169 23:22:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:31.169 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:31.169 23:22:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.169 23:22:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:31.169 23:22:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:31.169 23:22:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:31.169 23:22:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:31.169 23:22:36 -- nvmf/common.sh@57 -- # uname 00:23:31.169 23:22:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:31.169 23:22:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:31.169 23:22:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:31.169 23:22:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:31.169 23:22:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:31.169 23:22:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:31.169 23:22:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:31.169 23:22:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:31.169 23:22:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:31.169 23:22:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:31.169 23:22:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:31.169 23:22:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:31.169 23:22:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:31.169 23:22:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:31.169 23:22:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:31.169 23:22:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:31.169 23:22:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@104 -- # continue 2 00:23:31.169 23:22:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:31.169 23:22:36 -- nvmf/common.sh@104 -- # continue 2 00:23:31.169 23:22:36 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:23:31.169 23:22:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:31.169 23:22:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:31.169 23:22:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:31.169 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:31.169 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:31.169 altname enp217s0f0np0 00:23:31.169 altname ens818f0np0 00:23:31.169 inet 192.168.100.8/24 scope global mlx_0_0 00:23:31.169 valid_lft forever preferred_lft forever 00:23:31.169 23:22:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:31.169 23:22:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:31.169 23:22:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:31.169 23:22:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:31.169 23:22:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:31.169 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:31.169 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:31.169 altname enp217s0f1np1 00:23:31.169 altname ens818f1np1 00:23:31.169 inet 192.168.100.9/24 scope global mlx_0_1 00:23:31.169 valid_lft forever preferred_lft forever 00:23:31.169 23:22:36 -- nvmf/common.sh@410 -- # return 0 00:23:31.169 23:22:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:31.169 23:22:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:31.169 23:22:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:31.169 23:22:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:31.169 23:22:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:31.169 23:22:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:31.169 23:22:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:31.169 23:22:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:31.169 23:22:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:31.169 23:22:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@104 -- # continue 2 00:23:31.169 23:22:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:31.169 23:22:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:31.169 23:22:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:31.169 23:22:36 -- 
nvmf/common.sh@104 -- # continue 2 00:23:31.169 23:22:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:31.169 23:22:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:31.169 23:22:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:31.169 23:22:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:31.169 23:22:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:31.169 23:22:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:31.169 23:22:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:31.169 192.168.100.9' 00:23:31.170 23:22:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:31.170 192.168.100.9' 00:23:31.170 23:22:36 -- nvmf/common.sh@445 -- # head -n 1 00:23:31.170 23:22:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:31.170 23:22:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:31.170 192.168.100.9' 00:23:31.170 23:22:36 -- nvmf/common.sh@446 -- # tail -n +2 00:23:31.170 23:22:36 -- nvmf/common.sh@446 -- # head -n 1 00:23:31.170 23:22:36 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:31.170 23:22:36 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:31.170 23:22:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:31.170 23:22:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:31.170 23:22:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:31.170 23:22:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:31.170 23:22:36 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:31.170 23:22:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:31.170 23:22:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:31.170 23:22:36 -- common/autotest_common.sh@10 -- # set +x 00:23:31.428 23:22:36 -- nvmf/common.sh@469 -- # nvmfpid=698732 00:23:31.428 23:22:36 -- nvmf/common.sh@470 -- # waitforlisten 698732 00:23:31.428 23:22:36 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:31.428 23:22:36 -- common/autotest_common.sh@819 -- # '[' -z 698732 ']' 00:23:31.428 23:22:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.428 23:22:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:31.428 23:22:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.428 23:22:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:31.428 23:22:36 -- common/autotest_common.sh@10 -- # set +x 00:23:31.428 [2024-11-02 23:22:36.973509] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
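NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP come from the ip/awk/cut pipeline traced above, run once per RDMA netdev. The same extraction as a standalone sketch (interface names taken from this run):

for ifc in mlx_0_0 mlx_0_1; do
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# -> 192.168.100.8 and 192.168.100.9, the target addresses used by the tests below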
00:23:31.428 [2024-11-02 23:22:36.973562] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.428 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.428 [2024-11-02 23:22:37.043304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.428 [2024-11-02 23:22:37.116458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:31.428 [2024-11-02 23:22:37.116576] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.428 [2024-11-02 23:22:37.116586] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.428 [2024-11-02 23:22:37.116594] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.428 [2024-11-02 23:22:37.116713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.428 [2024-11-02 23:22:37.116733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.428 [2024-11-02 23:22:37.116766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.428 [2024-11-02 23:22:37.116768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:32.360 23:22:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:32.360 23:22:37 -- common/autotest_common.sh@852 -- # return 0 00:23:32.360 23:22:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:32.360 23:22:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:32.360 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:23:32.360 23:22:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.360 23:22:37 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:32.360 23:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.360 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:23:32.360 [2024-11-02 23:22:37.864043] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24a4380/0x24a8870) succeed. 00:23:32.360 [2024-11-02 23:22:37.873166] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24a5970/0x24e9f10) succeed. 
00:23:32.360 23:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.360 23:22:37 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:32.360 23:22:37 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:32.360 23:22:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:32.360 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:23:32.360 23:22:37 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:32.360 23:22:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:37 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.360 23:22:38 -- target/shutdown.sh@28 -- # cat 00:23:32.360 23:22:38 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:32.360 23:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:32.360 23:22:38 -- common/autotest_common.sh@10 -- # set +x 00:23:32.360 Malloc1 00:23:32.360 [2024-11-02 23:22:38.094663] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:32.617 Malloc2 00:23:32.617 Malloc3 00:23:32.617 Malloc4 00:23:32.617 Malloc5 00:23:32.617 Malloc6 00:23:32.618 Malloc7 00:23:32.875 Malloc8 00:23:32.875 Malloc9 00:23:32.875 Malloc10 00:23:32.875 23:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:32.875 23:22:38 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:32.875 23:22:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:32.875 23:22:38 -- common/autotest_common.sh@10 -- # set +x 00:23:32.875 23:22:38 -- target/shutdown.sh@124 -- # perfpid=699052 00:23:32.875 23:22:38 -- target/shutdown.sh@125 -- # waitforlisten 699052 /var/tmp/bdevperf.sock 00:23:32.875 23:22:38 -- common/autotest_common.sh@819 -- # '[' -z 699052 ']' 00:23:32.875 23:22:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.875 23:22:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:32.875 23:22:38 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:32.875 23:22:38 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:32.875 23:22:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.875 23:22:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:32.875 23:22:38 -- nvmf/common.sh@520 -- # config=() 00:23:32.875 23:22:38 -- common/autotest_common.sh@10 -- # set +x 00:23:32.875 23:22:38 -- nvmf/common.sh@520 -- # local subsystem config 00:23:32.875 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.875 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 [2024-11-02 23:22:38.589249] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:32.876 [2024-11-02 23:22:38.589299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699052 ] 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 
00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:32.876 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.876 23:22:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:32.876 { 00:23:32.876 "params": { 00:23:32.876 "name": "Nvme$subsystem", 00:23:32.876 "trtype": "$TEST_TRANSPORT", 00:23:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.876 "adrfam": "ipv4", 00:23:32.876 "trsvcid": "$NVMF_PORT", 00:23:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.876 "hdgst": ${hdgst:-false}, 00:23:32.876 "ddgst": ${ddgst:-false} 00:23:32.876 }, 00:23:32.876 "method": "bdev_nvme_attach_controller" 00:23:32.876 } 00:23:32.876 EOF 00:23:32.876 )") 00:23:32.876 23:22:38 -- nvmf/common.sh@542 -- # cat 00:23:33.134 23:22:38 -- nvmf/common.sh@544 -- # jq . 00:23:33.134 23:22:38 -- nvmf/common.sh@545 -- # IFS=, 00:23:33.134 23:22:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:33.134 "params": { 00:23:33.134 "name": "Nvme1", 00:23:33.134 "trtype": "rdma", 00:23:33.134 "traddr": "192.168.100.8", 00:23:33.134 "adrfam": "ipv4", 00:23:33.134 "trsvcid": "4420", 00:23:33.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.134 "hdgst": false, 00:23:33.134 "ddgst": false 00:23:33.134 }, 00:23:33.134 "method": "bdev_nvme_attach_controller" 00:23:33.134 },{ 00:23:33.134 "params": { 00:23:33.134 "name": "Nvme2", 00:23:33.134 "trtype": "rdma", 00:23:33.134 "traddr": "192.168.100.8", 00:23:33.134 "adrfam": "ipv4", 00:23:33.134 "trsvcid": "4420", 00:23:33.134 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.134 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.134 "hdgst": false, 00:23:33.134 "ddgst": false 00:23:33.134 }, 00:23:33.134 "method": "bdev_nvme_attach_controller" 00:23:33.134 },{ 00:23:33.134 "params": { 00:23:33.134 "name": "Nvme3", 00:23:33.134 "trtype": "rdma", 00:23:33.134 "traddr": "192.168.100.8", 00:23:33.134 "adrfam": "ipv4", 00:23:33.134 "trsvcid": "4420", 00:23:33.134 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:33.134 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:33.134 "hdgst": false, 00:23:33.134 "ddgst": false 00:23:33.134 }, 00:23:33.134 "method": "bdev_nvme_attach_controller" 00:23:33.134 },{ 00:23:33.134 "params": { 00:23:33.134 "name": "Nvme4", 00:23:33.134 "trtype": "rdma", 00:23:33.134 "traddr": "192.168.100.8", 00:23:33.134 "adrfam": "ipv4", 00:23:33.134 "trsvcid": "4420", 00:23:33.134 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:33.134 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:33.134 "hdgst": false, 00:23:33.134 "ddgst": false 00:23:33.134 }, 00:23:33.134 "method": "bdev_nvme_attach_controller" 00:23:33.134 },{ 00:23:33.134 "params": { 00:23:33.134 "name": "Nvme5", 00:23:33.134 "trtype": "rdma", 00:23:33.134 "traddr": "192.168.100.8", 00:23:33.134 "adrfam": "ipv4", 00:23:33.134 "trsvcid": "4420", 00:23:33.134 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:33.134 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:33.134 "hdgst": false, 00:23:33.134 "ddgst": false 00:23:33.134 }, 00:23:33.134 "method": "bdev_nvme_attach_controller" 00:23:33.134 },{ 00:23:33.134 "params": { 00:23:33.134 "name": "Nvme6", 00:23:33.134 "trtype": "rdma", 00:23:33.134 "traddr": "192.168.100.8", 00:23:33.134 "adrfam": "ipv4", 00:23:33.134 "trsvcid": "4420", 00:23:33.134 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:33.134 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:33.134 "hdgst": false, 00:23:33.134 "ddgst": false 00:23:33.135 }, 00:23:33.135 "method": "bdev_nvme_attach_controller" 00:23:33.135 },{ 00:23:33.135 "params": { 00:23:33.135 "name": "Nvme7", 00:23:33.135 "trtype": "rdma", 00:23:33.135 "traddr": "192.168.100.8", 00:23:33.135 "adrfam": "ipv4", 00:23:33.135 "trsvcid": "4420", 00:23:33.135 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:33.135 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:33.135 "hdgst": false, 00:23:33.135 "ddgst": false 00:23:33.135 }, 00:23:33.135 "method": "bdev_nvme_attach_controller" 00:23:33.135 },{ 00:23:33.135 "params": { 00:23:33.135 "name": "Nvme8", 00:23:33.135 "trtype": "rdma", 00:23:33.135 "traddr": "192.168.100.8", 00:23:33.135 "adrfam": "ipv4", 00:23:33.135 "trsvcid": "4420", 00:23:33.135 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:33.135 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:33.135 "hdgst": false, 00:23:33.135 "ddgst": false 00:23:33.135 }, 00:23:33.135 "method": "bdev_nvme_attach_controller" 00:23:33.135 },{ 00:23:33.135 "params": { 00:23:33.135 "name": "Nvme9", 00:23:33.135 "trtype": "rdma", 00:23:33.135 "traddr": "192.168.100.8", 00:23:33.135 "adrfam": "ipv4", 00:23:33.135 "trsvcid": "4420", 00:23:33.135 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:33.135 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:33.135 "hdgst": false, 00:23:33.135 "ddgst": false 00:23:33.135 }, 00:23:33.135 "method": "bdev_nvme_attach_controller" 00:23:33.135 },{ 00:23:33.135 "params": { 00:23:33.135 "name": "Nvme10", 00:23:33.135 "trtype": "rdma", 00:23:33.135 "traddr": "192.168.100.8", 00:23:33.135 "adrfam": "ipv4", 00:23:33.135 "trsvcid": "4420", 00:23:33.135 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:33.135 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:33.135 "hdgst": false, 00:23:33.135 "ddgst": false 00:23:33.135 }, 00:23:33.135 "method": "bdev_nvme_attach_controller" 00:23:33.135 }' 00:23:33.135 [2024-11-02 23:22:38.660687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.135 [2024-11-02 23:22:38.727634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.078 Running I/O for 10 seconds... 
00:23:34.647 23:22:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:34.647 23:22:40 -- common/autotest_common.sh@852 -- # return 0 00:23:34.647 23:22:40 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:34.647 23:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.647 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:23:34.647 23:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.647 23:22:40 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.647 23:22:40 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:34.647 23:22:40 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:34.647 23:22:40 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:34.647 23:22:40 -- target/shutdown.sh@57 -- # local ret=1 00:23:34.647 23:22:40 -- target/shutdown.sh@58 -- # local i 00:23:34.647 23:22:40 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:34.647 23:22:40 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:34.647 23:22:40 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:34.647 23:22:40 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:34.647 23:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.647 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:23:34.647 23:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.647 23:22:40 -- target/shutdown.sh@60 -- # read_io_count=461 00:23:34.647 23:22:40 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:23:34.647 23:22:40 -- target/shutdown.sh@64 -- # ret=0 00:23:34.647 23:22:40 -- target/shutdown.sh@65 -- # break 00:23:34.647 23:22:40 -- target/shutdown.sh@69 -- # return 0 00:23:34.647 23:22:40 -- target/shutdown.sh@134 -- # killprocess 698732 00:23:34.647 23:22:40 -- common/autotest_common.sh@926 -- # '[' -z 698732 ']' 00:23:34.647 23:22:40 -- common/autotest_common.sh@930 -- # kill -0 698732 00:23:34.647 23:22:40 -- common/autotest_common.sh@931 -- # uname 00:23:34.647 23:22:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:34.906 23:22:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 698732 00:23:34.906 23:22:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:34.906 23:22:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:34.906 23:22:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 698732' 00:23:34.906 killing process with pid 698732 00:23:34.906 23:22:40 -- common/autotest_common.sh@945 -- # kill 698732 00:23:34.906 23:22:40 -- common/autotest_common.sh@950 -- # wait 698732 00:23:35.471 23:22:40 -- target/shutdown.sh@135 -- # nvmfpid= 00:23:35.471 23:22:40 -- target/shutdown.sh@138 -- # sleep 1 00:23:36.049 [2024-11-02 23:22:41.515096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.515137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:a27356b0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.049 [2024-11-02 23:22:41.515150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.515159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:a27356b0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.049 [2024-11-02 23:22:41.515168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.515176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:a27356b0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.049 [2024-11-02 23:22:41.515185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.515193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:a27356b0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.049 [2024-11-02 23:22:41.517527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.049 [2024-11-02 23:22:41.517577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.049 [2024-11-02 23:22:41.517642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.517677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.517709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.517741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.517772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.517803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.517834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.517865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.520237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.049 [2024-11-02 23:22:41.520278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:23:36.049 [2024-11-02 23:22:41.520329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.520362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.520402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.520434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.520466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.520496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.520528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.520558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.522576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.049 [2024-11-02 23:22:41.522619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:36.049 [2024-11-02 23:22:41.522668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.522700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.522732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.522763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.522795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.522829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.522838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.522846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.525190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.049 [2024-11-02 23:22:41.525230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:23:36.049 [2024-11-02 23:22:41.525279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.525311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.525343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.525374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.525407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.525438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.525470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.525501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.527964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.049 [2024-11-02 23:22:41.528012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:36.049 [2024-11-02 23:22:41.528059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.528092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.049 [2024-11-02 23:22:41.528124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.049 [2024-11-02 23:22:41.528159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.528168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.528177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.528186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.528194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.530475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.050 [2024-11-02 23:22:41.530516] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:23:36.050 [2024-11-02 23:22:41.530566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.530599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.530631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.530661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.530693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.530724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.530756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.530786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.533047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.050 [2024-11-02 23:22:41.533087] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:36.050 [2024-11-02 23:22:41.533135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.533167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.533200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.533237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.533269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.533300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.533332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.533362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.535540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.050 [2024-11-02 23:22:41.535580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:23:36.050 [2024-11-02 23:22:41.535627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.535659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.535691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.535721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.535753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.535783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.535814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.535844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.537755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.050 [2024-11-02 23:22:41.537796] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:36.050 [2024-11-02 23:22:41.537844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.537875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.537907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.537937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.537984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.538016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.538048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.050 [2024-11-02 23:22:41.538078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:56747 cdw0:a27356b0 sqhd:5900 p:1 m:1 dnr:0 00:23:36.050 [2024-11-02 23:22:41.539979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.050 [2024-11-02 23:22:41.540026] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
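Everything from the waitforio trace down to the cnode failure notices above is the shutdown test doing its job: target/shutdown.sh polls bdevperf over /var/tmp/bdevperf.sock until Nvme1n1 reports at least 100 completed reads (461 here), then kills the nvmf target process (698732). The admin-queue aborts and "CQ transport error -6 (No such device or address)" messages are the expected fallout of removing the RDMA target from under ten live controllers. A minimal sketch of that polling loop follows, assuming an SPDK checkout at $rootdir and a 0.25 s retry interval (the interval is not visible in the trace).

#!/usr/bin/env bash
# Sketch of the waitforio polling seen in the trace (target/shutdown.sh @59/@60/@63).
rootdir=${rootdir:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
rpc_sock=/var/tmp/bdevperf.sock
bdev=Nvme1n1

ret=1
for ((i = 10; i != 0; i--)); do
    # Ask bdevperf how many reads the bdev has completed so far.
    read_io_count=$("$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].num_read_ops')
    # 461 reads were reported in this run; anything >= 100 proves I/O is flowing.
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
exit $ret

Once the loop succeeds and the target is killed, the trap installed at @129 in the trace ensures bdevperf and the target are torn down on any exit path; the per-LBA ABORTED dumps that follow are bdevperf resetting and failing over each disconnected controller.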
00:23:36.050 [2024-11-02 23:22:41.542413] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller. 00:23:36.050 [2024-11-02 23:22:41.542430] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.050 [2024-11-02 23:22:41.543621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000702f180 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071cfe80 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x183400 00:23:36.050 [2024-11-02 23:22:41.543706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x183d00 00:23:36.050 [2024-11-02 23:22:41.543798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000713fa00 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000714fa80 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ff800 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002372c0 len:0x10000 key:0x183d00 00:23:36.050 [2024-11-02 23:22:41.543919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000718fc80 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.543976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000708f480 len:0x10000 key:0x184200 00:23:36.050 [2024-11-02 23:22:41.543990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.544007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x184300 00:23:36.050 [2024-11-02 23:22:41.544020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.544038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b23f800 len:0x10000 key:0x184300 00:23:36.050 [2024-11-02 23:22:41.544051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.050 [2024-11-02 23:22:41.544068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e4ebc0 len:0x10000 key:0x183400 00:23:36.050 [2024-11-02 23:22:41.544082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071bfe00 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.544143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80256 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000710f880 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.544204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.544265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.544328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070cf680 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000b2bfc00 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.544450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x183d00 00:23:36.051 [2024-11-02 23:22:41.544480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000706f380 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x183d00 00:23:36.051 [2024-11-02 23:22:41.544541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b28fa80 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.544571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x183d00 00:23:36.051 [2024-11-02 23:22:41.544602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.544665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000719fd00 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x184200 
00:23:36.051 [2024-11-02 23:22:41.544727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.544758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071eff80 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000207140 len:0x10000 key:0x183d00 00:23:36.051 [2024-11-02 23:22:41.544818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x183d00 00:23:36.051 [2024-11-02 23:22:41.544849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071dff00 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x183d00 00:23:36.051 [2024-11-02 23:22:41.544911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x184200 00:23:36.051 [2024-11-02 23:22:41.544942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.544960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.545003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.545021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x183400 00:23:36.051 [2024-11-02 
23:22:41.545034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.545051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000267440 len:0x10000 key:0x183d00 00:23:36.051 [2024-11-02 23:22:41.545065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.545082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ef000 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.545096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.545115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114f0000 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.545129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.545146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013509000 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.545159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.545178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d68c000 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.545191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.545209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d66b000 len:0x10000 key:0x184300 00:23:36.051 [2024-11-02 23:22:41.545221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.051 [2024-11-02 23:22:41.545240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d64a000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001292d000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001290c000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128eb000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128ca000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128a9000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012888000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c45000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c24000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c03000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012be2000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012990000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.545617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7ff000 len:0x10000 key:0x184300 00:23:36.052 [2024-11-02 23:22:41.545630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.548874] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256ec0 was disconnected and freed. reset controller. 00:23:36.052 [2024-11-02 23:22:41.548894] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.052 [2024-11-02 23:22:41.548914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000089ef00 len:0x10000 key:0x183700 00:23:36.052 [2024-11-02 23:22:41.548928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.548959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000088ee80 len:0x10000 key:0x183700 00:23:36.052 [2024-11-02 23:22:41.549025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ff680 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183700 00:23:36.052 [2024-11-02 23:22:41.549088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x182a00 00:23:36.052 [2024-11-02 23:22:41.549119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000050f700 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059fb80 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 
23:22:41.549201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001951f980 len:0x10000 key:0x182a00 00:23:36.052 [2024-11-02 23:22:41.549214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008ef180 len:0x10000 key:0x183700 00:23:36.052 [2024-11-02 23:22:41.549245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000049f380 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008df100 len:0x10000 key:0x183700 00:23:36.052 [2024-11-02 23:22:41.549306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000084ec80 len:0x10000 key:0x183700 00:23:36.052 [2024-11-02 23:22:41.549337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008aef80 len:0x10000 key:0x183700 00:23:36.052 [2024-11-02 23:22:41.549370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000047f280 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x182a00 00:23:36.052 [2024-11-02 23:22:41.549464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005bfc80 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000052f800 len:0x10000 key:0x184000 00:23:36.052 [2024-11-02 23:22:41.549648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.052 [2024-11-02 23:22:41.549666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000085ed00 len:0x10000 key:0x183700 00:23:36.052 [2024-11-02 23:22:41.549679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.549711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ef800 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.549745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549764] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000087ee00 len:0x10000 key:0x183700 00:23:36.053 [2024-11-02 23:22:41.549777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x184000 00:23:36.053 [2024-11-02 23:22:41.549808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000086ed80 len:0x10000 key:0x183700 00:23:36.053 [2024-11-02 23:22:41.549838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081eb00 len:0x10000 key:0x183700 00:23:36.053 [2024-11-02 23:22:41.549868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ff880 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.549898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x184000 00:23:36.053 [2024-11-02 23:22:41.549929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x183700 00:23:36.053 [2024-11-02 23:22:41.549960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.549985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.549998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005efe00 len:0x10000 key:0x184000 00:23:36.053 [2024-11-02 23:22:41.550030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001950f900 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.550060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048f300 len:0x10000 key:0x184000 00:23:36.053 [2024-11-02 23:22:41.550093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.550123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000053f880 len:0x10000 key:0x184000 00:23:36.053 [2024-11-02 23:22:41.550154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080ea80 len:0x10000 key:0x183700 00:23:36.053 [2024-11-02 23:22:41.550184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x184000 00:23:36.053 [2024-11-02 23:22:41.550215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001954fb00 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.550247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x184000 00:23:36.053 [2024-11-02 23:22:41.550278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.550308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000055f980 len:0x10000 key:0x184000 00:23:36.053 [2024-11-02 23:22:41.550338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195cff00 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.550369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001959fd80 len:0x10000 key:0x182a00 00:23:36.053 [2024-11-02 23:22:41.550401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106c2000 len:0x10000 key:0x184300 00:23:36.053 [2024-11-02 23:22:41.550434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106e3000 len:0x10000 key:0x184300 00:23:36.053 [2024-11-02 23:22:41.550466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.053 [2024-11-02 23:22:41.550484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c714000 len:0x10000 key:0x184300 00:23:36.053 [2024-11-02 23:22:41.550497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c735000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c756000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7fb000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 
key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b3d000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b1c000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012afb000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ada000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012dd1000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 
23:22:41.550905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ba0000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.550955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba0f000 len:0x10000 key:0x184300 00:23:36.054 [2024-11-02 23:22:41.550976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.553851] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller. 00:23:36.054 [2024-11-02 23:22:41.553873] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.054 [2024-11-02 23:22:41.553893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x182b00 00:23:36.054 [2024-11-02 23:22:41.553906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.553928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194cf700 len:0x10000 key:0x182a00 00:23:36.054 [2024-11-02 23:22:41.553941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.553959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x182a00 00:23:36.054 [2024-11-02 23:22:41.553979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.553997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x182b00 00:23:36.054 [2024-11-02 23:22:41.554010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x182b00 00:23:36.054 [2024-11-02 23:22:41.554045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x182b00 00:23:36.054 [2024-11-02 23:22:41.554075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x182d00 00:23:36.054 [2024-11-02 23:22:41.554106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x182a00 00:23:36.054 [2024-11-02 23:22:41.554167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 
dnr:0 00:23:36.054 [2024-11-02 23:22:41.554375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x182d00 00:23:36.054 [2024-11-02 23:22:41.554421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x182c00 00:23:36.054 [2024-11-02 23:22:41.554452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x182d00 00:23:36.054 [2024-11-02 23:22:41.554483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.054 [2024-11-02 23:22:41.554502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x182a00 00:23:36.054 [2024-11-02 23:22:41.554515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x182c00 00:23:36.055 [2024-11-02 23:22:41.554547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x182a00 00:23:36.055 [2024-11-02 23:22:41.554578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x182d00 00:23:36.055 [2024-11-02 23:22:41.554609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x182b00 00:23:36.055 [2024-11-02 23:22:41.554640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 
23:22:41.554657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x182a00 00:23:36.055 [2024-11-02 23:22:41.554670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x182a00 00:23:36.055 [2024-11-02 23:22:41.554701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x182a00 00:23:36.055 [2024-11-02 23:22:41.554733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x182c00 00:23:36.055 [2024-11-02 23:22:41.554763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x182c00 00:23:36.055 [2024-11-02 23:22:41.554793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x182c00 00:23:36.055 [2024-11-02 23:22:41.554828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x182c00 00:23:36.055 [2024-11-02 23:22:41.554859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x182b00 00:23:36.055 [2024-11-02 23:22:41.554890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x182b00 00:23:36.055 [2024-11-02 23:22:41.554920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x182c00 00:23:36.055 [2024-11-02 23:22:41.554951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.554998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x182c00 00:23:36.055 [2024-11-02 23:22:41.555013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x182c00 00:23:36.055 [2024-11-02 23:22:41.555044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121b3000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121d4000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121f5000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012216000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012237000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012258000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012279000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ccd000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cee000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d0f000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010890000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c945000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012192000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81536 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb59000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb17000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.055 [2024-11-02 23:22:41.555696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x184300 00:23:36.055 [2024-11-02 23:22:41.555709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.555727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x184300 00:23:36.056 [2024-11-02 23:22:41.555740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.555757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x184300 00:23:36.056 [2024-11-02 23:22:41.555771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.555789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120cc000 len:0x10000 key:0x184300 00:23:36.056 [2024-11-02 23:22:41.555802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.555820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ab000 
len:0x10000 key:0x184300 00:23:36.056 [2024-11-02 23:22:41.555835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.555855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001208a000 len:0x10000 key:0x184300 00:23:36.056 [2024-11-02 23:22:41.555868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.555886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x184300 00:23:36.056 [2024-11-02 23:22:41.555899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.555917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x184300 00:23:36.056 [2024-11-02 23:22:41.555930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.558962] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:23:36.056 [2024-11-02 23:22:41.558986] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.056 [2024-11-02 23:22:41.559005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:23:36.056 [2024-11-02 23:22:41.559144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a9fb80 len:0x10000 key:0x182d00 00:23:36.056 [2024-11-02 23:22:41.559212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182f00 00:23:36.056 [2024-11-02 23:22:41.559243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182f00 00:23:36.056 [2024-11-02 23:22:41.559274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x182d00 00:23:36.056 [2024-11-02 23:22:41.559305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x182f00 00:23:36.056 [2024-11-02 23:22:41.559336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:23:36.056 [2024-11-02 23:22:41.559367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182f00 00:23:36.056 [2024-11-02 23:22:41.559521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a5f980 len:0x10000 key:0x182d00 00:23:36.056 [2024-11-02 23:22:41.559552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182d00 00:23:36.056 [2024-11-02 23:22:41.559646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182f00 00:23:36.056 [2024-11-02 23:22:41.559677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182f00 00:23:36.056 [2024-11-02 23:22:41.559708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 
m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182e00 00:23:36.056 [2024-11-02 23:22:41.559739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.056 [2024-11-02 23:22:41.559756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182f00 00:23:36.057 [2024-11-02 23:22:41.559769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.559786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182e00 00:23:36.057 [2024-11-02 23:22:41.559799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.559817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182f00 00:23:36.057 [2024-11-02 23:22:41.559830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.559848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182e00 00:23:36.057 [2024-11-02 23:22:41.559861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.559878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182e00 00:23:36.057 [2024-11-02 23:22:41.559891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.559911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182f00 00:23:36.057 [2024-11-02 23:22:41.559924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.559942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182f00 00:23:36.057 [2024-11-02 23:22:41.559955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.559978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182d00 00:23:36.057 [2024-11-02 23:22:41.559991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 
23:22:41.560009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182e00 00:23:36.057 [2024-11-02 23:22:41.560022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182f00 00:23:36.057 [2024-11-02 23:22:41.560052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfe80 len:0x10000 key:0x182e00 00:23:36.057 [2024-11-02 23:22:41.560082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182e00 00:23:36.057 [2024-11-02 23:22:41.560112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da8b000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d51000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d30000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d8f000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d6e000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013065000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013044000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013023000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013002000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fe1000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd48000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd27000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd06000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcf6000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd17000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd38000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x2000122dc000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.057 [2024-11-02 23:22:41.560883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122bb000 len:0x10000 key:0x184300 00:23:36.057 [2024-11-02 23:22:41.560896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.566756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001229a000 len:0x10000 key:0x184300 00:23:36.058 [2024-11-02 23:22:41.566772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.566790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bce5000 len:0x10000 key:0x184300 00:23:36.058 [2024-11-02 23:22:41.566803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.566824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc4000 len:0x10000 key:0x184300 00:23:36.058 [2024-11-02 23:22:41.566837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.569815] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller. 00:23:36.058 [2024-11-02 23:22:41.569862] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:36.058 [2024-11-02 23:22:41.569905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:23:36.058 [2024-11-02 23:22:41.569937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183100 00:23:36.058 [2024-11-02 23:22:41.570125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:23:36.058 [2024-11-02 23:22:41.570155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:23:36.058 [2024-11-02 23:22:41.570185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183100 00:23:36.058 [2024-11-02 23:22:41.570215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 
23:22:41.570269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:23:36.058 [2024-11-02 23:22:41.570285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x183100 00:23:36.058 [2024-11-02 23:22:41.570316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183100 00:23:36.058 [2024-11-02 23:22:41.570346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182f00 00:23:36.058 [2024-11-02 23:22:41.570377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:23:36.058 [2024-11-02 23:22:41.570531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:23:36.058 [2024-11-02 23:22:41.570592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:23:36.058 [2024-11-02 23:22:41.570624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183100 00:23:36.058 [2024-11-02 23:22:41.570685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182f00 00:23:36.058 [2024-11-02 23:22:41.570716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x183300 00:23:36.058 [2024-11-02 23:22:41.570746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:23:36.058 [2024-11-02 23:22:41.570776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183100 00:23:36.058 [2024-11-02 23:22:41.570807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.058 [2024-11-02 23:22:41.570825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:23:36.059 [2024-11-02 23:22:41.570838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.570855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183300 00:23:36.059 [2024-11-02 23:22:41.570868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.570886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183300 00:23:36.059 [2024-11-02 23:22:41.570899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.570917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183300 00:23:36.059 [2024-11-02 23:22:41.570930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.570947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:23:36.059 [2024-11-02 23:22:41.570960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.570985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182f00 00:23:36.059 [2024-11-02 23:22:41.570998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x183300 00:23:36.059 [2024-11-02 23:22:41.571028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:23:36.059 [2024-11-02 23:22:41.571058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x183300 00:23:36.059 [2024-11-02 23:22:41.571089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f7e000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f5d000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f1b000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012efa000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010179000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001019a000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x2000101bb000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101dc000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101fd000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7da000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7b9000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bffd000 len:0x10000 key:0x184300 
00:23:36.059 [2024-11-02 23:22:41.571687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x184300 00:23:36.059 [2024-11-02 23:22:41.571844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.059 [2024-11-02 23:22:41.571862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x184300 00:23:36.060 [2024-11-02 23:22:41.571875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.571893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100f5000 len:0x10000 key:0x184300 00:23:36.060 [2024-11-02 23:22:41.571906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.571924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100d4000 len:0x10000 key:0x184300 00:23:36.060 [2024-11-02 23:22:41.571937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.571955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100b3000 len:0x10000 key:0x184300 00:23:36.060 [2024-11-02 23:22:41.571972] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.574829] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192565c0 was disconnected and freed. reset controller. 00:23:36.060 [2024-11-02 23:22:41.574848] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.060 [2024-11-02 23:22:41.574866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.574880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.574903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.574916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.574934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.574947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.574964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.574984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a44f900 len:0x10000 key:0x183100 00:23:36.060 [2024-11-02 23:22:41.575075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 
00:23:36.060 [2024-11-02 23:22:41.575399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183100 00:23:36.060 [2024-11-02 23:22:41.575412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183100 00:23:36.060 [2024-11-02 23:22:41.575503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575674] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183500 00:23:36.060 [2024-11-02 23:22:41.575808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.060 [2024-11-02 23:22:41.575826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183f00 00:23:36.060 [2024-11-02 23:22:41.575839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.575856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183f00 00:23:36.061 [2024-11-02 23:22:41.575869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.575886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183500 00:23:36.061 [2024-11-02 23:22:41.575899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.575917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183f00 00:23:36.061 [2024-11-02 23:22:41.575935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.575954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183f00 00:23:36.061 [2024-11-02 23:22:41.575971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.575989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001316d000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001314c000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001084e000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20001086f000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecfa000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012867000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012846000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1ec000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c189000 
len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.576835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c168000 len:0x10000 key:0x184300 00:23:36.061 [2024-11-02 23:22:41.576848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.579330] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256380 was disconnected and freed. reset controller. 00:23:36.061 [2024-11-02 23:22:41.579348] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.061 [2024-11-02 23:22:41.579366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183500 00:23:36.061 [2024-11-02 23:22:41.579379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.579400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183a00 00:23:36.061 [2024-11-02 23:22:41.579413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.579431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183a00 00:23:36.061 [2024-11-02 23:22:41.579444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.061 [2024-11-02 23:22:41.579462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.579478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.579509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183600 00:23:36.062 [2024-11-02 23:22:41.579540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x183600 00:23:36.062 [2024-11-02 23:22:41.579570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.579600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.579631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183600 00:23:36.062 [2024-11-02 23:22:41.579662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.579692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.579723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.579753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.579784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.579816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.579847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.579877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183600 00:23:36.062 [2024-11-02 23:22:41.579908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.579939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183600 00:23:36.062 [2024-11-02 23:22:41.579975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.579992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183600 00:23:36.062 [2024-11-02 23:22:41.580005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.580035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.580065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183500 00:23:36.062 [2024-11-02 23:22:41.580096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.580126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.580156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.580198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.580229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.580259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.580290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183c00 00:23:36.062 [2024-11-02 23:22:41.580320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183600 00:23:36.062 [2024-11-02 23:22:41.580351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183500 00:23:36.062 [2024-11-02 23:22:41.580381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183a00 00:23:36.062 [2024-11-02 23:22:41.580411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 
00:23:36.062 [2024-11-02 23:22:41.580429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183600 00:23:36.062 [2024-11-02 23:22:41.580442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.062 [2024-11-02 23:22:41.580460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183a00 00:23:36.063 [2024-11-02 23:22:41.580473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580711] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131d0000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.580979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.580997] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b673000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc9f000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc7e000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc5d000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc3c000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e016000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.581341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124ec000 len:0x10000 key:0x184300 00:23:36.063 [2024-11-02 23:22:41.581355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.584131] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256140 was disconnected and freed. reset controller. 00:23:36.063 [2024-11-02 23:22:41.584179] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.063 [2024-11-02 23:22:41.584230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183900 00:23:36.063 [2024-11-02 23:22:41.584244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.584265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183200 00:23:36.063 [2024-11-02 23:22:41.584279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.063 [2024-11-02 23:22:41.584297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183200 00:23:36.063 [2024-11-02 23:22:41.584310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 
23:22:41.584402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.584924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584955] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.584978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.584991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.585021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.585052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.585083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.585115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.585146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.585176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.585207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.585237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.585268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.585298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.585329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.585360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183900 00:23:36.064 [2024-11-02 23:22:41.585390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.064 [2024-11-02 23:22:41.585408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183200 00:23:36.064 [2024-11-02 23:22:41.585420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183200 00:23:36.065 [2024-11-02 23:22:41.585451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183200 00:23:36.065 [2024-11-02 23:22:41.585483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183200 00:23:36.065 [2024-11-02 23:22:41.585514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 
p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183200 00:23:36.065 [2024-11-02 23:22:41.585544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d374000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b694000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b5000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c6a000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e13f000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112bf000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113a6000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011385000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 
23:22:41.585813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011364000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133e0000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c66f000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c64e000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.585951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.585987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.586001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.586019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.586032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.586050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.586062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.586081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b883000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.586094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.586112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.586125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.586143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.586156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.586174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b820000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.586189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.586207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x184300 00:23:36.065 [2024-11-02 23:22:41.586220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.588959] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:23:36.065 [2024-11-02 23:22:41.589012] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:36.065 [2024-11-02 23:22:41.589056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183b00 00:23:36.065 [2024-11-02 23:22:41.589089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183800 00:23:36.065 [2024-11-02 23:22:41.589169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183b00 00:23:36.065 [2024-11-02 23:22:41.589244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183800 00:23:36.065 [2024-11-02 23:22:41.589320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184100 00:23:36.065 [2024-11-02 23:22:41.589407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184100 00:23:36.065 [2024-11-02 23:22:41.589439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184100 00:23:36.065 [2024-11-02 23:22:41.589470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184100 00:23:36.065 [2024-11-02 23:22:41.589501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183b00 00:23:36.065 [2024-11-02 23:22:41.589532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589555] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183800 00:23:36.065 [2024-11-02 23:22:41.589569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.065 [2024-11-02 23:22:41.589586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183800 00:23:36.066 [2024-11-02 23:22:41.589813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184100 00:23:36.066 [2024-11-02 23:22:41.589907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.589937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183800 00:23:36.066 [2024-11-02 23:22:41.589979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.589997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184100 00:23:36.066 [2024-11-02 23:22:41.590010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184100 00:23:36.066 [2024-11-02 23:22:41.590132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184100 00:23:36.066 [2024-11-02 23:22:41.590162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184100 00:23:36.066 [2024-11-02 23:22:41.590267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184100 00:23:36.066 [2024-11-02 23:22:41.590446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183800 00:23:36.066 [2024-11-02 23:22:41.590474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183b00 00:23:36.066 [2024-11-02 23:22:41.590559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183800 00:23:36.066 [2024-11-02 23:22:41.590590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d584000 len:0x10000 key:0x184300 00:23:36.066 [2024-11-02 23:22:41.590618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x184300 00:23:36.066 [2024-11-02 23:22:41.590647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000b8c5000 len:0x10000 key:0x184300 00:23:36.066 [2024-11-02 23:22:41.590676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.066 [2024-11-02 23:22:41.590692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x184300 00:23:36.066 [2024-11-02 23:22:41.590704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.590733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114cf000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.590762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.590791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011910000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.590820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001296f000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.590849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e877000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.590877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e856000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.590906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e835000 len:0x10000 key:0x184300 
00:23:36.067 [2024-11-02 23:22:41.590937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e814000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.590971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.590990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7f3000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.591001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.591018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7d2000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.591031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.591047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.591059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.591076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011742000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.591088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.591105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011763000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.591117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.591133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fa3000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.591145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.591162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f82000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.591174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.591191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f61000 len:0x10000 key:0x184300 00:23:36.067 [2024-11-02 23:22:41.591203] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:df7db000 sqhd:5310 p:0 m:0 dnr:0 00:23:36.067 [2024-11-02 23:22:41.608632] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 00:23:36.067 [2024-11-02 23:22:41.608651] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608721] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608738] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608749] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608761] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608772] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608784] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608795] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608807] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608818] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:36.067 [2024-11-02 23:22:41.608829] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
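The flood of ABORTED - SQ DELETION completions and the repeated "Unable to perform failover, already in progress" notices above are bdevperf reporting in-flight verify I/O whose submission queues vanished when the target side was taken down under it. A rough sketch of the kind of sequence that produces this output; the pid variables and JSON config path are placeholders and this is not the literal tc3 script:

  # Drive verify I/O at the RDMA target, then take the target away mid-run;
  # bdevperf then prints the aborted completions and RDMA reconnect errors
  # seen in this log before it is reaped during cleanup.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json ./bdevperf_rdma.json -q 64 -o 65536 -w verify -t 10 &
  perf_pid=$!
  sleep 3                    # let the verify workload reach steady state
  kill -INT "$nvmfpid"       # stop the nvmf target while queues are still busy
  wait "$perf_pid" || true   # non-zero exit is expected; a kill -9 follows in cleanup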
00:23:36.067 task offset: 83968 on job bdev=Nvme1n1 fails 00:23:36.067 00:23:36.067 Latency(us) 00:23:36.067 [2024-11-02T22:22:41.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme1n1 ended in about 1.95 seconds with error 00:23:36.067 Verification LBA range: start 0x0 length 0x400 00:23:36.067 Nvme1n1 : 1.95 322.54 20.16 32.87 0.00 179471.81 41104.18 1020054.73 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme2n1 ended in about 1.95 seconds with error 00:23:36.067 Verification LBA range: start 0x0 length 0x400 00:23:36.067 Nvme2n1 : 1.95 303.59 18.97 32.82 0.00 188889.30 41943.04 1093874.48 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme3n1 ended in about 1.96 seconds with error 00:23:36.067 Verification LBA range: start 0x0 length 0x400 00:23:36.067 Nvme3n1 : 1.96 316.06 19.75 32.73 0.00 181554.02 38168.17 1093874.48 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme4n1 ended in about 1.96 seconds with error 00:23:36.067 Verification LBA range: start 0x0 length 0x400 00:23:36.067 Nvme4n1 : 1.96 321.90 20.12 32.65 0.00 178049.41 10013.90 1087163.60 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme5n1 ended in about 1.97 seconds with error 00:23:36.067 Verification LBA range: start 0x0 length 0x400 00:23:36.067 Nvme5n1 : 1.97 318.60 19.91 32.47 0.00 178802.51 37748.74 1093874.48 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme6n1 ended in about 1.98 seconds with error 00:23:36.067 Verification LBA range: start 0x0 length 0x400 00:23:36.067 Nvme6n1 : 1.98 317.77 19.86 32.38 0.00 179159.36 38587.60 1093874.48 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.067 [2024-11-02T22:22:41.824Z] Job: Nvme7n1 ended in about 1.98 seconds with error 00:23:36.067 Verification LBA range: start 0x0 length 0x400 00:23:36.067 Nvme7n1 : 1.98 316.99 19.81 32.30 0.00 178910.10 39426.46 1093874.48 00:23:36.067 [2024-11-02T22:22:41.825Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.068 [2024-11-02T22:22:41.825Z] Job: Nvme8n1 ended in about 1.99 seconds with error 00:23:36.068 Verification LBA range: start 0x0 length 0x400 00:23:36.068 Nvme8n1 : 1.99 316.27 19.77 32.23 0.00 178724.48 40265.32 1093874.48 00:23:36.068 [2024-11-02T22:22:41.825Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.068 [2024-11-02T22:22:41.825Z] Job: Nvme9n1 ended in about 1.99 seconds with error 00:23:36.068 Verification LBA range: start 0x0 length 0x400 00:23:36.068 Nvme9n1 : 1.99 251.70 15.73 32.15 0.00 218769.51 48444.21 1093874.48 00:23:36.068 [2024-11-02T22:22:41.825Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.068 [2024-11-02T22:22:41.825Z] Job: Nvme10n1 ended in about 2.00 seconds with error 
00:23:36.068 Verification LBA range: start 0x0 length 0x400 00:23:36.068 Nvme10n1 : 2.00 251.07 15.69 32.07 0.00 218475.73 50751.08 1093874.48 00:23:36.068 [2024-11-02T22:22:41.825Z] =================================================================================================================== 00:23:36.068 [2024-11-02T22:22:41.825Z] Total : 3036.49 189.78 324.69 0.00 186910.22 10013.90 1093874.48 00:23:36.068 [2024-11-02 23:22:41.630610] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:36.068 [2024-11-02 23:22:41.630635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630662] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630786] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:36.068 [2024-11-02 23:22:41.630826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:36.068 [2024-11-02 23:22:41.644413] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.644441] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.644463] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:23:36.068 [2024-11-02 23:22:41.644576] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.644591] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.644601] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580 00:23:36.068 [2024-11-02 23:22:41.644684] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.644698] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.644707] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0 00:23:36.068 [2024-11-02 23:22:41.644797] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event 
channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.644812] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.644822] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100 00:23:36.068 [2024-11-02 23:22:41.644949] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.644963] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.644983] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:23:36.068 [2024-11-02 23:22:41.645073] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.645087] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.645097] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0 00:23:36.068 [2024-11-02 23:22:41.645171] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.645186] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.645196] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:23:36.068 [2024-11-02 23:22:41.645273] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.645287] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.645297] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:23:36.068 [2024-11-02 23:22:41.645372] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.645387] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.645397] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:23:36.068 [2024-11-02 23:22:41.645487] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:36.068 [2024-11-02 23:22:41.645502] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:36.068 [2024-11-02 23:22:41.645511] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:23:36.326 23:22:41 -- target/shutdown.sh@141 -- # kill -9 699052 00:23:36.326 23:22:41 -- target/shutdown.sh@143 -- # stoptarget 00:23:36.326 23:22:41 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:36.326 23:22:41 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:36.326 23:22:41 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.326 23:22:41 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:36.326 23:22:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:36.326 23:22:41 -- nvmf/common.sh@116 -- # sync 00:23:36.326 23:22:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:36.326 23:22:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:36.326 23:22:41 -- nvmf/common.sh@119 -- # set +e 00:23:36.326 23:22:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:36.326 23:22:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:36.326 rmmod nvme_rdma 00:23:36.326 rmmod nvme_fabrics 00:23:36.326 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 699052 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:23:36.326 23:22:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:36.326 23:22:42 -- nvmf/common.sh@123 -- # set -e 00:23:36.326 23:22:42 -- nvmf/common.sh@124 -- # return 0 00:23:36.326 23:22:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:23:36.326 23:22:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:36.326 23:22:42 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:36.326 00:23:36.326 real 0m5.381s 00:23:36.326 user 0m18.452s 00:23:36.326 sys 0m1.304s 00:23:36.326 23:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.326 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.326 ************************************ 00:23:36.326 END TEST nvmf_shutdown_tc3 00:23:36.326 ************************************ 00:23:36.326 23:22:42 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:23:36.326 00:23:36.326 real 0m25.635s 00:23:36.326 user 1m15.478s 00:23:36.326 sys 0m9.253s 00:23:36.326 23:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.326 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.326 ************************************ 00:23:36.326 END TEST nvmf_shutdown 00:23:36.326 ************************************ 00:23:36.584 23:22:42 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:36.584 23:22:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:36.584 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.584 23:22:42 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:36.584 23:22:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:36.584 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.584 23:22:42 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:36.584 23:22:42 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:36.584 23:22:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:36.584 23:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:36.584 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.584 ************************************ 00:23:36.584 START TEST nvmf_multicontroller 00:23:36.584 ************************************ 00:23:36.584 23:22:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:36.584 * Looking for test storage... 
00:23:36.584 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:36.584 23:22:42 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.584 23:22:42 -- nvmf/common.sh@7 -- # uname -s 00:23:36.584 23:22:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.584 23:22:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.584 23:22:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.584 23:22:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.584 23:22:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.584 23:22:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.584 23:22:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.584 23:22:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.584 23:22:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.584 23:22:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.584 23:22:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:36.584 23:22:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:36.584 23:22:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.584 23:22:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.584 23:22:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.584 23:22:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:36.584 23:22:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.584 23:22:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.584 23:22:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.584 23:22:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.584 23:22:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.584 23:22:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.584 23:22:42 -- paths/export.sh@5 -- # export PATH 00:23:36.584 23:22:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.584 23:22:42 -- nvmf/common.sh@46 -- # : 0 00:23:36.584 23:22:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:36.584 23:22:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:36.584 23:22:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:36.584 23:22:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.584 23:22:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.584 23:22:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:36.584 23:22:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:36.584 23:22:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:36.584 23:22:42 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.584 23:22:42 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.584 23:22:42 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:36.584 23:22:42 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:36.584 23:22:42 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.584 23:22:42 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:23:36.584 23:22:42 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:36.584 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
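The multicontroller case bails out immediately on this transport; the xtrace around this point ('[' rdma == rdma ']', the echo, then exit 0) corresponds to a guard along the following lines. The variable name is an assumption, only the comparison and the skip message come from the trace:

  # multicontroller.sh guard (sketch): skip the whole test on RDMA, where the
  # host and target cannot be configured with the same IP on this stack.
  if [ "$TEST_TRANSPORT" == "rdma" ]; then
      echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
      exit 0
  fi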
00:23:36.585 23:22:42 -- host/multicontroller.sh@20 -- # exit 0 00:23:36.585 00:23:36.585 real 0m0.136s 00:23:36.585 user 0m0.054s 00:23:36.585 sys 0m0.092s 00:23:36.585 23:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.585 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.585 ************************************ 00:23:36.585 END TEST nvmf_multicontroller 00:23:36.585 ************************************ 00:23:36.843 23:22:42 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:36.843 23:22:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:36.843 23:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:36.843 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.843 ************************************ 00:23:36.843 START TEST nvmf_aer 00:23:36.843 ************************************ 00:23:36.843 23:22:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:36.843 * Looking for test storage... 00:23:36.843 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:36.843 23:22:42 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.843 23:22:42 -- nvmf/common.sh@7 -- # uname -s 00:23:36.843 23:22:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.843 23:22:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.843 23:22:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.843 23:22:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.843 23:22:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.843 23:22:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.843 23:22:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.843 23:22:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.843 23:22:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.843 23:22:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.843 23:22:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:36.843 23:22:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:36.843 23:22:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.843 23:22:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.843 23:22:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.843 23:22:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:36.843 23:22:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.843 23:22:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.843 23:22:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.843 23:22:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.843 23:22:42 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.843 23:22:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.843 23:22:42 -- paths/export.sh@5 -- # export PATH 00:23:36.843 23:22:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.843 23:22:42 -- nvmf/common.sh@46 -- # : 0 00:23:36.843 23:22:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:36.843 23:22:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:36.843 23:22:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:36.843 23:22:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.843 23:22:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.843 23:22:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:36.843 23:22:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:36.843 23:22:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:36.843 23:22:42 -- host/aer.sh@11 -- # nvmftestinit 00:23:36.843 23:22:42 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:36.843 23:22:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.843 23:22:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:36.843 23:22:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:36.843 23:22:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:36.843 23:22:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.843 23:22:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.843 23:22:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.843 23:22:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:36.843 23:22:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:36.843 23:22:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:36.843 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:23:43.397 23:22:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:43.397 23:22:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:43.397 23:22:49 -- nvmf/common.sh@290 -- # local -a 
pci_devs 00:23:43.397 23:22:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:43.397 23:22:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:43.397 23:22:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:43.397 23:22:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:43.397 23:22:49 -- nvmf/common.sh@294 -- # net_devs=() 00:23:43.397 23:22:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:43.397 23:22:49 -- nvmf/common.sh@295 -- # e810=() 00:23:43.397 23:22:49 -- nvmf/common.sh@295 -- # local -ga e810 00:23:43.397 23:22:49 -- nvmf/common.sh@296 -- # x722=() 00:23:43.397 23:22:49 -- nvmf/common.sh@296 -- # local -ga x722 00:23:43.397 23:22:49 -- nvmf/common.sh@297 -- # mlx=() 00:23:43.397 23:22:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:43.397 23:22:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.397 23:22:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:43.397 23:22:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:43.397 23:22:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:43.397 23:22:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:43.397 23:22:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:43.397 23:22:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:43.397 23:22:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:43.397 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:43.397 23:22:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:43.397 23:22:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:43.397 23:22:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:43.397 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:43.397 23:22:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
00:23:43.397 23:22:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:43.397 23:22:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:43.397 23:22:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:43.397 23:22:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:43.397 23:22:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.397 23:22:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:43.397 23:22:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.397 23:22:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:43.397 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:43.397 23:22:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.397 23:22:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:43.397 23:22:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.397 23:22:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:43.397 23:22:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.397 23:22:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:43.397 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:43.397 23:22:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.398 23:22:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:43.398 23:22:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:43.398 23:22:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:43.398 23:22:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:43.398 23:22:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:43.398 23:22:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:43.398 23:22:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:43.398 23:22:49 -- nvmf/common.sh@57 -- # uname 00:23:43.398 23:22:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:43.398 23:22:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:43.398 23:22:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:43.398 23:22:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:43.398 23:22:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:43.398 23:22:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:43.398 23:22:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:43.398 23:22:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:43.398 23:22:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:43.398 23:22:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:43.398 23:22:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:43.398 23:22:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:43.398 23:22:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:43.398 23:22:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:43.398 23:22:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:43.656 23:22:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:43.656 23:22:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:43.656 23:22:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.656 23:22:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:43.656 23:22:49 -- nvmf/common.sh@104 -- # continue 2 00:23:43.656 23:22:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:43.656 23:22:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:23:43.656 23:22:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.656 23:22:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:43.656 23:22:49 -- nvmf/common.sh@104 -- # continue 2 00:23:43.656 23:22:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:43.656 23:22:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:43.656 23:22:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:43.656 23:22:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:43.656 23:22:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:43.656 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:43.656 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:43.656 altname enp217s0f0np0 00:23:43.656 altname ens818f0np0 00:23:43.656 inet 192.168.100.8/24 scope global mlx_0_0 00:23:43.656 valid_lft forever preferred_lft forever 00:23:43.656 23:22:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:43.656 23:22:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:43.656 23:22:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:43.656 23:22:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:43.656 23:22:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:43.656 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:43.656 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:43.656 altname enp217s0f1np1 00:23:43.656 altname ens818f1np1 00:23:43.656 inet 192.168.100.9/24 scope global mlx_0_1 00:23:43.656 valid_lft forever preferred_lft forever 00:23:43.656 23:22:49 -- nvmf/common.sh@410 -- # return 0 00:23:43.656 23:22:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:43.656 23:22:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:43.656 23:22:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:43.656 23:22:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:43.656 23:22:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:43.656 23:22:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:43.656 23:22:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:43.656 23:22:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:43.656 23:22:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:43.656 23:22:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:43.656 23:22:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.656 23:22:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:43.656 23:22:49 -- nvmf/common.sh@104 -- # continue 2 00:23:43.656 23:22:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
00:23:43.656 23:22:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.656 23:22:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.656 23:22:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:43.656 23:22:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:43.656 23:22:49 -- nvmf/common.sh@104 -- # continue 2 00:23:43.656 23:22:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:43.656 23:22:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:43.656 23:22:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:43.656 23:22:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:43.656 23:22:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:43.656 23:22:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:43.656 23:22:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:43.657 23:22:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:43.657 23:22:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:43.657 23:22:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:43.657 192.168.100.9' 00:23:43.657 23:22:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:43.657 192.168.100.9' 00:23:43.657 23:22:49 -- nvmf/common.sh@445 -- # head -n 1 00:23:43.657 23:22:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:43.657 23:22:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:43.657 192.168.100.9' 00:23:43.657 23:22:49 -- nvmf/common.sh@446 -- # tail -n +2 00:23:43.657 23:22:49 -- nvmf/common.sh@446 -- # head -n 1 00:23:43.657 23:22:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:43.657 23:22:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:43.657 23:22:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:43.657 23:22:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:43.657 23:22:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:43.657 23:22:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:43.657 23:22:49 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:43.657 23:22:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:43.657 23:22:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:43.657 23:22:49 -- common/autotest_common.sh@10 -- # set +x 00:23:43.657 23:22:49 -- nvmf/common.sh@469 -- # nvmfpid=703138 00:23:43.657 23:22:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:43.657 23:22:49 -- nvmf/common.sh@470 -- # waitforlisten 703138 00:23:43.657 23:22:49 -- common/autotest_common.sh@819 -- # '[' -z 703138 ']' 00:23:43.657 23:22:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.657 23:22:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:43.657 23:22:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
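nvmfappstart above launches the target binary with the test's core mask and trace flags, then waitforlisten blocks until the app answers on its RPC socket. A minimal equivalent, assuming an SPDK build tree in $rootdir; the polling loop is illustrative rather than the helper's exact implementation:

  # Start the NVMe-oF target with the same core mask and trace flags as above.
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Block until the app responds on its default RPC socket.
  until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done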
00:23:43.657 23:22:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:43.657 23:22:49 -- common/autotest_common.sh@10 -- # set +x 00:23:43.657 [2024-11-02 23:22:49.372577] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:43.657 [2024-11-02 23:22:49.372626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.657 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.914 [2024-11-02 23:22:49.442849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.914 [2024-11-02 23:22:49.515331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:43.914 [2024-11-02 23:22:49.515472] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.914 [2024-11-02 23:22:49.515483] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.914 [2024-11-02 23:22:49.515491] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.914 [2024-11-02 23:22:49.515542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.914 [2024-11-02 23:22:49.515562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.914 [2024-11-02 23:22:49.515627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.915 [2024-11-02 23:22:49.515629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.477 23:22:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:44.477 23:22:50 -- common/autotest_common.sh@852 -- # return 0 00:23:44.477 23:22:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:44.477 23:22:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:44.477 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.736 23:22:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.736 23:22:50 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:44.736 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.736 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.736 [2024-11-02 23:22:50.262718] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbc5090/0xbc9580) succeed. 00:23:44.736 [2024-11-02 23:22:50.271849] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbc6680/0xc0ac20) succeed. 
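With the RDMA transport created above (and the mlx5_0/mlx5_1 IB devices registered), the aer test assembles its target configuration through a short RPC sequence, which the trace below issues via rpc_cmd. Driven directly with rpc.py the same sequence looks like this; the rpc.py path is an assumption, the flags mirror the trace:

  rpc="$rootdir/scripts/rpc.py"
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport, 8 KiB IO unit
  $rpc bdev_malloc_create 64 512 --name Malloc0                          # 64 MB ram-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420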
00:23:44.736 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.736 23:22:50 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:44.736 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.736 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.736 Malloc0 00:23:44.736 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.736 23:22:50 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:44.736 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.736 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.736 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.736 23:22:50 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:44.736 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.736 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.736 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.736 23:22:50 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:44.736 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.736 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.736 [2024-11-02 23:22:50.446689] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:44.736 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.736 23:22:50 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:44.736 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.736 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.736 [2024-11-02 23:22:50.454282] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:44.736 [ 00:23:44.736 { 00:23:44.736 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:44.736 "subtype": "Discovery", 00:23:44.736 "listen_addresses": [], 00:23:44.736 "allow_any_host": true, 00:23:44.736 "hosts": [] 00:23:44.736 }, 00:23:44.736 { 00:23:44.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.736 "subtype": "NVMe", 00:23:44.736 "listen_addresses": [ 00:23:44.736 { 00:23:44.736 "transport": "RDMA", 00:23:44.736 "trtype": "RDMA", 00:23:44.736 "adrfam": "IPv4", 00:23:44.736 "traddr": "192.168.100.8", 00:23:44.736 "trsvcid": "4420" 00:23:44.736 } 00:23:44.736 ], 00:23:44.736 "allow_any_host": true, 00:23:44.736 "hosts": [], 00:23:44.736 "serial_number": "SPDK00000000000001", 00:23:44.736 "model_number": "SPDK bdev Controller", 00:23:44.736 "max_namespaces": 2, 00:23:44.736 "min_cntlid": 1, 00:23:44.736 "max_cntlid": 65519, 00:23:44.736 "namespaces": [ 00:23:44.736 { 00:23:44.736 "nsid": 1, 00:23:44.736 "bdev_name": "Malloc0", 00:23:44.736 "name": "Malloc0", 00:23:44.736 "nguid": "7079228CFA634D7F8B04BDD7F870B5EE", 00:23:44.736 "uuid": "7079228c-fa63-4d7f-8b04-bdd7f870b5ee" 00:23:44.736 } 00:23:44.736 ] 00:23:44.736 } 00:23:44.736 ] 00:23:44.736 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.736 23:22:50 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:44.736 23:22:50 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:44.736 23:22:50 -- host/aer.sh@33 -- # aerpid=703422 00:23:44.736 23:22:50 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:44.736 23:22:50 -- 
host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:44.736 23:22:50 -- common/autotest_common.sh@1244 -- # local i=0 00:23:44.736 23:22:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.736 23:22:50 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:23:44.736 23:22:50 -- common/autotest_common.sh@1247 -- # i=1 00:23:44.736 23:22:50 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:23:44.994 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.994 23:22:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.994 23:22:50 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:23:44.994 23:22:50 -- common/autotest_common.sh@1247 -- # i=2 00:23:44.994 23:22:50 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:23:44.994 23:22:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.994 23:22:50 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.994 23:22:50 -- common/autotest_common.sh@1255 -- # return 0 00:23:44.994 23:22:50 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:44.994 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.994 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.994 Malloc1 00:23:44.994 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.994 23:22:50 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:44.994 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.994 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:44.994 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.994 23:22:50 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:44.994 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.994 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.252 [ 00:23:45.252 { 00:23:45.252 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:45.252 "subtype": "Discovery", 00:23:45.252 "listen_addresses": [], 00:23:45.252 "allow_any_host": true, 00:23:45.252 "hosts": [] 00:23:45.252 }, 00:23:45.252 { 00:23:45.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.252 "subtype": "NVMe", 00:23:45.252 "listen_addresses": [ 00:23:45.252 { 00:23:45.252 "transport": "RDMA", 00:23:45.252 "trtype": "RDMA", 00:23:45.252 "adrfam": "IPv4", 00:23:45.252 "traddr": "192.168.100.8", 00:23:45.252 "trsvcid": "4420" 00:23:45.252 } 00:23:45.252 ], 00:23:45.252 "allow_any_host": true, 00:23:45.252 "hosts": [], 00:23:45.252 "serial_number": "SPDK00000000000001", 00:23:45.252 "model_number": "SPDK bdev Controller", 00:23:45.252 "max_namespaces": 2, 00:23:45.252 "min_cntlid": 1, 00:23:45.252 "max_cntlid": 65519, 00:23:45.252 "namespaces": [ 00:23:45.252 { 00:23:45.252 "nsid": 1, 00:23:45.252 "bdev_name": "Malloc0", 00:23:45.252 "name": "Malloc0", 00:23:45.252 "nguid": "7079228CFA634D7F8B04BDD7F870B5EE", 00:23:45.252 "uuid": "7079228c-fa63-4d7f-8b04-bdd7f870b5ee" 00:23:45.252 }, 00:23:45.252 { 00:23:45.252 "nsid": 2, 00:23:45.252 "bdev_name": "Malloc1", 00:23:45.252 "name": "Malloc1", 00:23:45.252 "nguid": "0C6F37C047554581BE81DAE8BD09F2B0", 00:23:45.252 "uuid": "0c6f37c0-4755-4581-be81-dae8bd09f2b0" 00:23:45.252 } 00:23:45.252 ] 00:23:45.252 } 00:23:45.252 ] 00:23:45.252 23:22:50 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.252 23:22:50 -- host/aer.sh@43 -- # wait 703422 00:23:45.252 Asynchronous Event Request test 00:23:45.252 Attaching to 192.168.100.8 00:23:45.252 Attached to 192.168.100.8 00:23:45.252 Registering asynchronous event callbacks... 00:23:45.252 Starting namespace attribute notice tests for all controllers... 00:23:45.252 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:45.252 aer_cb - Changed Namespace 00:23:45.252 Cleaning up... 00:23:45.252 23:22:50 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:45.252 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.252 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.252 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.252 23:22:50 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:45.252 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.252 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.252 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.252 23:22:50 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.252 23:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.252 23:22:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.252 23:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.252 23:22:50 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:45.252 23:22:50 -- host/aer.sh@51 -- # nvmftestfini 00:23:45.252 23:22:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:45.252 23:22:50 -- nvmf/common.sh@116 -- # sync 00:23:45.252 23:22:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:45.252 23:22:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:45.252 23:22:50 -- nvmf/common.sh@119 -- # set +e 00:23:45.252 23:22:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:45.252 23:22:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:45.252 rmmod nvme_rdma 00:23:45.252 rmmod nvme_fabrics 00:23:45.252 23:22:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:45.252 23:22:50 -- nvmf/common.sh@123 -- # set -e 00:23:45.252 23:22:50 -- nvmf/common.sh@124 -- # return 0 00:23:45.252 23:22:50 -- nvmf/common.sh@477 -- # '[' -n 703138 ']' 00:23:45.252 23:22:50 -- nvmf/common.sh@478 -- # killprocess 703138 00:23:45.252 23:22:50 -- common/autotest_common.sh@926 -- # '[' -z 703138 ']' 00:23:45.252 23:22:50 -- common/autotest_common.sh@930 -- # kill -0 703138 00:23:45.252 23:22:50 -- common/autotest_common.sh@931 -- # uname 00:23:45.252 23:22:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:45.252 23:22:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 703138 00:23:45.252 23:22:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:45.252 23:22:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:45.252 23:22:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 703138' 00:23:45.252 killing process with pid 703138 00:23:45.252 23:22:50 -- common/autotest_common.sh@945 -- # kill 703138 00:23:45.252 [2024-11-02 23:22:50.966818] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:45.252 23:22:50 -- common/autotest_common.sh@950 -- # wait 703138 00:23:45.510 23:22:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:45.510 
23:22:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:45.510 00:23:45.510 real 0m8.876s 00:23:45.510 user 0m8.669s 00:23:45.510 sys 0m5.763s 00:23:45.510 23:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.510 23:22:51 -- common/autotest_common.sh@10 -- # set +x 00:23:45.510 ************************************ 00:23:45.510 END TEST nvmf_aer 00:23:45.510 ************************************ 00:23:45.768 23:22:51 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:45.768 23:22:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:45.768 23:22:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:45.768 23:22:51 -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 ************************************ 00:23:45.768 START TEST nvmf_async_init 00:23:45.768 ************************************ 00:23:45.768 23:22:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:45.768 * Looking for test storage... 00:23:45.768 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:45.768 23:22:51 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.768 23:22:51 -- nvmf/common.sh@7 -- # uname -s 00:23:45.768 23:22:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.768 23:22:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.768 23:22:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.768 23:22:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.768 23:22:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.768 23:22:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.768 23:22:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.768 23:22:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.768 23:22:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.768 23:22:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.768 23:22:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:45.768 23:22:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:45.768 23:22:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.768 23:22:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.768 23:22:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.768 23:22:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:45.768 23:22:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.768 23:22:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.768 23:22:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.768 23:22:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
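For reference, the aer flow that just finished can be replayed by hand against the same target; this is a condensed sketch of the RPC calls and the tool invocation visible in the trace above (paths relative to the spdk checkout, waitforfile loop simplified to a plain poll):

    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2   # room for 2 namespaces
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                         # becomes nsid 1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # start the AER tool; it touches the file once it is ready (the trace polls for it
    # before changing anything), then hot-add a second namespace to trigger the
    # "Changed Namespace" asynchronous event reported in the output above
    test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait $aerpid
    # teardown, as in the trace
    rpc_cmd bdev_malloc_delete Malloc0
    rpc_cmd bdev_malloc_delete Malloc1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1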
00:23:45.768 23:22:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.768 23:22:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.768 23:22:51 -- paths/export.sh@5 -- # export PATH 00:23:45.768 23:22:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.768 23:22:51 -- nvmf/common.sh@46 -- # : 0 00:23:45.768 23:22:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:45.768 23:22:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:45.768 23:22:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:45.769 23:22:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.769 23:22:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.769 23:22:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:45.769 23:22:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:45.769 23:22:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:45.769 23:22:51 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:45.769 23:22:51 -- host/async_init.sh@14 -- # null_block_size=512 00:23:45.769 23:22:51 -- host/async_init.sh@15 -- # null_bdev=null0 00:23:45.769 23:22:51 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:45.769 23:22:51 -- host/async_init.sh@20 -- # uuidgen 00:23:45.769 23:22:51 -- host/async_init.sh@20 -- # tr -d - 00:23:45.769 23:22:51 -- host/async_init.sh@20 -- # nguid=8bd0683ac48d402eacebc0f84cf985f1 00:23:45.769 23:22:51 -- host/async_init.sh@22 -- # nvmftestinit 00:23:45.769 23:22:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:45.769 23:22:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.769 23:22:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:45.769 23:22:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:45.769 23:22:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:45.769 23:22:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.769 23:22:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.769 23:22:51 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.769 23:22:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:45.769 23:22:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:45.769 23:22:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:45.769 23:22:51 -- common/autotest_common.sh@10 -- # set +x 00:23:52.324 23:22:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:52.324 23:22:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:52.324 23:22:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:52.324 23:22:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:52.324 23:22:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:52.324 23:22:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:52.324 23:22:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:52.324 23:22:57 -- nvmf/common.sh@294 -- # net_devs=() 00:23:52.324 23:22:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:52.324 23:22:57 -- nvmf/common.sh@295 -- # e810=() 00:23:52.324 23:22:57 -- nvmf/common.sh@295 -- # local -ga e810 00:23:52.324 23:22:57 -- nvmf/common.sh@296 -- # x722=() 00:23:52.324 23:22:57 -- nvmf/common.sh@296 -- # local -ga x722 00:23:52.324 23:22:57 -- nvmf/common.sh@297 -- # mlx=() 00:23:52.324 23:22:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:52.324 23:22:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.324 23:22:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:52.324 23:22:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:52.324 23:22:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:52.324 23:22:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:52.324 23:22:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:52.324 23:22:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:52.324 23:22:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:52.324 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:52.324 23:22:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:52.324 23:22:57 -- nvmf/common.sh@339 -- # for pci in 
"${pci_devs[@]}" 00:23:52.324 23:22:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:52.324 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:52.324 23:22:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:52.324 23:22:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:52.324 23:22:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:52.324 23:22:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:52.324 23:22:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.324 23:22:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:52.324 23:22:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.324 23:22:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:52.324 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:52.324 23:22:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.325 23:22:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.325 23:22:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:52.325 23:22:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.325 23:22:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:52.325 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:52.325 23:22:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.325 23:22:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:52.325 23:22:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:52.325 23:22:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:52.325 23:22:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:52.325 23:22:57 -- nvmf/common.sh@57 -- # uname 00:23:52.325 23:22:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:52.325 23:22:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:52.325 23:22:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:52.325 23:22:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:52.325 23:22:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:52.325 23:22:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:52.325 23:22:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:52.325 23:22:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:52.325 23:22:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:52.325 23:22:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:52.325 23:22:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:52.325 23:22:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:52.325 23:22:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:52.325 23:22:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:52.325 23:22:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:52.325 23:22:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 
00:23:52.325 23:22:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:52.325 23:22:57 -- nvmf/common.sh@104 -- # continue 2 00:23:52.325 23:22:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:52.325 23:22:57 -- nvmf/common.sh@104 -- # continue 2 00:23:52.325 23:22:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:52.325 23:22:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:52.325 23:22:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:52.325 23:22:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:52.325 23:22:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:52.325 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:52.325 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:52.325 altname enp217s0f0np0 00:23:52.325 altname ens818f0np0 00:23:52.325 inet 192.168.100.8/24 scope global mlx_0_0 00:23:52.325 valid_lft forever preferred_lft forever 00:23:52.325 23:22:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:52.325 23:22:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:52.325 23:22:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:52.325 23:22:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:52.325 23:22:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:52.325 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:52.325 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:52.325 altname enp217s0f1np1 00:23:52.325 altname ens818f1np1 00:23:52.325 inet 192.168.100.9/24 scope global mlx_0_1 00:23:52.325 valid_lft forever preferred_lft forever 00:23:52.325 23:22:57 -- nvmf/common.sh@410 -- # return 0 00:23:52.325 23:22:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:52.325 23:22:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:52.325 23:22:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:52.325 23:22:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:52.325 23:22:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:52.325 23:22:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:52.325 23:22:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:52.325 23:22:57 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:52.325 23:22:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:52.325 23:22:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:52.325 23:22:57 -- nvmf/common.sh@104 -- # continue 2 00:23:52.325 23:22:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.325 23:22:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:52.325 23:22:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:52.325 23:22:57 -- nvmf/common.sh@104 -- # continue 2 00:23:52.325 23:22:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:52.325 23:22:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:52.325 23:22:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:52.325 23:22:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:52.325 23:22:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:52.325 23:22:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:52.325 23:22:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:52.325 23:22:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:52.325 23:22:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:52.325 23:22:58 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:52.325 192.168.100.9' 00:23:52.325 23:22:58 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:52.325 192.168.100.9' 00:23:52.325 23:22:58 -- nvmf/common.sh@445 -- # head -n 1 00:23:52.325 23:22:58 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:52.325 23:22:58 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:52.325 192.168.100.9' 00:23:52.325 23:22:58 -- nvmf/common.sh@446 -- # tail -n +2 00:23:52.325 23:22:58 -- nvmf/common.sh@446 -- # head -n 1 00:23:52.325 23:22:58 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:52.325 23:22:58 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:52.325 23:22:58 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:52.325 23:22:58 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:52.325 23:22:58 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:52.325 23:22:58 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:52.325 23:22:58 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:52.325 23:22:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:52.325 23:22:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:52.325 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:23:52.325 23:22:58 -- nvmf/common.sh@469 -- # nvmfpid=706645 00:23:52.325 23:22:58 -- nvmf/common.sh@470 -- # waitforlisten 706645 00:23:52.325 23:22:58 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:52.325 23:22:58 -- common/autotest_common.sh@819 
-- # '[' -z 706645 ']' 00:23:52.325 23:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.325 23:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:52.325 23:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.325 23:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:52.325 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:23:52.583 [2024-11-02 23:22:58.113122] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:52.583 [2024-11-02 23:22:58.113174] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.583 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.583 [2024-11-02 23:22:58.183958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.583 [2024-11-02 23:22:58.262128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:52.583 [2024-11-02 23:22:58.262232] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.583 [2024-11-02 23:22:58.262243] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.583 [2024-11-02 23:22:58.262251] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.583 [2024-11-02 23:22:58.262275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.514 23:22:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:53.514 23:22:58 -- common/autotest_common.sh@852 -- # return 0 00:23:53.514 23:22:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:53.514 23:22:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:53.514 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 23:22:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.514 23:22:58 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:53.514 23:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 [2024-11-02 23:22:59.006094] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1287f30/0x128c420) succeed. 00:23:53.514 [2024-11-02 23:22:59.015170] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1289430/0x12cdac0) succeed. 
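The address discovery that precedes the target start reduces to a couple of one-liners; a sketch, assuming the mlx5 netdevs carry the names and addresses seen in this run and glossing over the nvmfappstart wrapper that actually launches the target:

    # first IPv4 address on each RDMA-capable netdev, as get_ip_address does above
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8 (NVMF_FIRST_TARGET_IP)
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9 (NVMF_SECOND_TARGET_IP)
    # load the host-side nvme-rdma module, as nvmftestinit does above
    modprobe nvme-rdma
    # single-core target for the async_init test (the aer target above ran four reactors),
    # then wait for /var/tmp/spdk.sock to accept RPCs (waitforlisten in the trace)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &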
00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 null0 00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8bd0683ac48d402eacebc0f84cf985f1 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 [2024-11-02 23:22:59.099345] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 nvme0n1 00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 [ 00:23:53.514 { 00:23:53.514 "name": "nvme0n1", 00:23:53.514 "aliases": [ 00:23:53.514 "8bd0683a-c48d-402e-aceb-c0f84cf985f1" 00:23:53.514 ], 00:23:53.514 "product_name": "NVMe disk", 00:23:53.514 "block_size": 512, 00:23:53.514 "num_blocks": 2097152, 00:23:53.514 "uuid": "8bd0683a-c48d-402e-aceb-c0f84cf985f1", 00:23:53.514 "assigned_rate_limits": { 00:23:53.514 "rw_ios_per_sec": 0, 00:23:53.514 "rw_mbytes_per_sec": 0, 00:23:53.514 "r_mbytes_per_sec": 0, 00:23:53.514 "w_mbytes_per_sec": 0 00:23:53.514 }, 00:23:53.514 "claimed": false, 00:23:53.514 "zoned": false, 00:23:53.514 "supported_io_types": { 00:23:53.514 "read": true, 00:23:53.514 "write": true, 00:23:53.514 "unmap": false, 00:23:53.514 "write_zeroes": true, 00:23:53.514 "flush": true, 00:23:53.514 "reset": true, 00:23:53.514 "compare": true, 00:23:53.514 "compare_and_write": true, 00:23:53.514 "abort": true, 00:23:53.514 "nvme_admin": true, 00:23:53.514 "nvme_io": true 00:23:53.514 }, 00:23:53.514 "memory_domains": [ 00:23:53.514 { 00:23:53.514 
"dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:53.514 "dma_device_type": 0 00:23:53.514 } 00:23:53.514 ], 00:23:53.514 "driver_specific": { 00:23:53.514 "nvme": [ 00:23:53.514 { 00:23:53.514 "trid": { 00:23:53.514 "trtype": "RDMA", 00:23:53.514 "adrfam": "IPv4", 00:23:53.514 "traddr": "192.168.100.8", 00:23:53.514 "trsvcid": "4420", 00:23:53.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:53.514 }, 00:23:53.514 "ctrlr_data": { 00:23:53.514 "cntlid": 1, 00:23:53.514 "vendor_id": "0x8086", 00:23:53.514 "model_number": "SPDK bdev Controller", 00:23:53.514 "serial_number": "00000000000000000000", 00:23:53.514 "firmware_revision": "24.01.1", 00:23:53.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.514 "oacs": { 00:23:53.514 "security": 0, 00:23:53.514 "format": 0, 00:23:53.514 "firmware": 0, 00:23:53.514 "ns_manage": 0 00:23:53.514 }, 00:23:53.514 "multi_ctrlr": true, 00:23:53.514 "ana_reporting": false 00:23:53.514 }, 00:23:53.514 "vs": { 00:23:53.514 "nvme_version": "1.3" 00:23:53.514 }, 00:23:53.514 "ns_data": { 00:23:53.514 "id": 1, 00:23:53.514 "can_share": true 00:23:53.514 } 00:23:53.514 } 00:23:53.514 ], 00:23:53.514 "mp_policy": "active_passive" 00:23:53.514 } 00:23:53.514 } 00:23:53.514 ] 00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 [2024-11-02 23:22:59.212886] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:53.514 [2024-11-02 23:22:59.233681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:53.514 [2024-11-02 23:22:59.256740] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:53.514 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.514 23:22:59 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:53.514 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.514 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.514 [ 00:23:53.514 { 00:23:53.514 "name": "nvme0n1", 00:23:53.514 "aliases": [ 00:23:53.514 "8bd0683a-c48d-402e-aceb-c0f84cf985f1" 00:23:53.514 ], 00:23:53.514 "product_name": "NVMe disk", 00:23:53.771 "block_size": 512, 00:23:53.771 "num_blocks": 2097152, 00:23:53.771 "uuid": "8bd0683a-c48d-402e-aceb-c0f84cf985f1", 00:23:53.771 "assigned_rate_limits": { 00:23:53.771 "rw_ios_per_sec": 0, 00:23:53.771 "rw_mbytes_per_sec": 0, 00:23:53.771 "r_mbytes_per_sec": 0, 00:23:53.771 "w_mbytes_per_sec": 0 00:23:53.771 }, 00:23:53.771 "claimed": false, 00:23:53.771 "zoned": false, 00:23:53.771 "supported_io_types": { 00:23:53.771 "read": true, 00:23:53.771 "write": true, 00:23:53.771 "unmap": false, 00:23:53.771 "write_zeroes": true, 00:23:53.771 "flush": true, 00:23:53.771 "reset": true, 00:23:53.771 "compare": true, 00:23:53.771 "compare_and_write": true, 00:23:53.771 "abort": true, 00:23:53.771 "nvme_admin": true, 00:23:53.771 "nvme_io": true 00:23:53.771 }, 00:23:53.771 "memory_domains": [ 00:23:53.771 { 00:23:53.771 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:53.771 "dma_device_type": 0 00:23:53.771 } 00:23:53.771 ], 00:23:53.771 "driver_specific": { 00:23:53.771 "nvme": [ 00:23:53.771 { 00:23:53.771 "trid": { 00:23:53.771 "trtype": "RDMA", 00:23:53.771 "adrfam": "IPv4", 00:23:53.771 "traddr": "192.168.100.8", 00:23:53.771 "trsvcid": "4420", 00:23:53.771 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:53.771 }, 00:23:53.771 "ctrlr_data": { 00:23:53.771 "cntlid": 2, 00:23:53.771 "vendor_id": "0x8086", 00:23:53.771 "model_number": "SPDK bdev Controller", 00:23:53.771 "serial_number": "00000000000000000000", 00:23:53.771 "firmware_revision": "24.01.1", 00:23:53.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.771 "oacs": { 00:23:53.771 "security": 0, 00:23:53.771 "format": 0, 00:23:53.771 "firmware": 0, 00:23:53.771 "ns_manage": 0 00:23:53.771 }, 00:23:53.771 "multi_ctrlr": true, 00:23:53.771 "ana_reporting": false 00:23:53.771 }, 00:23:53.771 "vs": { 00:23:53.771 "nvme_version": "1.3" 00:23:53.771 }, 00:23:53.771 "ns_data": { 00:23:53.771 "id": 1, 00:23:53.771 "can_share": true 00:23:53.771 } 00:23:53.771 } 00:23:53.771 ], 00:23:53.771 "mp_policy": "active_passive" 00:23:53.771 } 00:23:53.771 } 00:23:53.771 ] 00:23:53.771 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.771 23:22:59 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.771 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.771 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.771 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.771 23:22:59 -- host/async_init.sh@53 -- # mktemp 00:23:53.771 23:22:59 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6xoEbjIH5U 00:23:53.771 23:22:59 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:53.771 23:22:59 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6xoEbjIH5U 00:23:53.771 23:22:59 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:53.771 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.771 23:22:59 -- common/autotest_common.sh@10 -- # set +x 
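The TLS portion of the test stages a PSK interchange key in a throw-away file and closes the subsystem to unknown hosts before opening the secure-channel listener; a sketch of just that staging step (key string copied from the trace, which uses a sample key):

    key_path=$(mktemp)                                     # /tmp/tmp.6xoEbjIH5U in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                                 # keep the PSK readable by the owner only
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable   # only explicitly added hosts may connect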
00:23:53.771 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.771 23:22:59 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:23:53.771 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.771 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.771 [2024-11-02 23:22:59.331869] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:53.771 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.771 23:22:59 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xoEbjIH5U 00:23:53.771 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.771 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.771 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.771 23:22:59 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6xoEbjIH5U 00:23:53.771 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.771 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.771 [2024-11-02 23:22:59.347893] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.771 nvme0n1 00:23:53.771 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.771 23:22:59 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:53.771 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.771 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.771 [ 00:23:53.771 { 00:23:53.771 "name": "nvme0n1", 00:23:53.771 "aliases": [ 00:23:53.771 "8bd0683a-c48d-402e-aceb-c0f84cf985f1" 00:23:53.771 ], 00:23:53.771 "product_name": "NVMe disk", 00:23:53.771 "block_size": 512, 00:23:53.771 "num_blocks": 2097152, 00:23:53.771 "uuid": "8bd0683a-c48d-402e-aceb-c0f84cf985f1", 00:23:53.771 "assigned_rate_limits": { 00:23:53.771 "rw_ios_per_sec": 0, 00:23:53.771 "rw_mbytes_per_sec": 0, 00:23:53.771 "r_mbytes_per_sec": 0, 00:23:53.771 "w_mbytes_per_sec": 0 00:23:53.771 }, 00:23:53.771 "claimed": false, 00:23:53.771 "zoned": false, 00:23:53.771 "supported_io_types": { 00:23:53.771 "read": true, 00:23:53.771 "write": true, 00:23:53.771 "unmap": false, 00:23:53.771 "write_zeroes": true, 00:23:53.771 "flush": true, 00:23:53.771 "reset": true, 00:23:53.771 "compare": true, 00:23:53.771 "compare_and_write": true, 00:23:53.771 "abort": true, 00:23:53.771 "nvme_admin": true, 00:23:53.771 "nvme_io": true 00:23:53.771 }, 00:23:53.771 "memory_domains": [ 00:23:53.771 { 00:23:53.771 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:53.771 "dma_device_type": 0 00:23:53.771 } 00:23:53.771 ], 00:23:53.771 "driver_specific": { 00:23:53.771 "nvme": [ 00:23:53.772 { 00:23:53.772 "trid": { 00:23:53.772 "trtype": "RDMA", 00:23:53.772 "adrfam": "IPv4", 00:23:53.772 "traddr": "192.168.100.8", 00:23:53.772 "trsvcid": "4421", 00:23:53.772 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:53.772 }, 00:23:53.772 "ctrlr_data": { 00:23:53.772 "cntlid": 3, 00:23:53.772 "vendor_id": "0x8086", 00:23:53.772 "model_number": "SPDK bdev Controller", 00:23:53.772 "serial_number": "00000000000000000000", 00:23:53.772 "firmware_revision": "24.01.1", 00:23:53.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.772 
"oacs": { 00:23:53.772 "security": 0, 00:23:53.772 "format": 0, 00:23:53.772 "firmware": 0, 00:23:53.772 "ns_manage": 0 00:23:53.772 }, 00:23:53.772 "multi_ctrlr": true, 00:23:53.772 "ana_reporting": false 00:23:53.772 }, 00:23:53.772 "vs": { 00:23:53.772 "nvme_version": "1.3" 00:23:53.772 }, 00:23:53.772 "ns_data": { 00:23:53.772 "id": 1, 00:23:53.772 "can_share": true 00:23:53.772 } 00:23:53.772 } 00:23:53.772 ], 00:23:53.772 "mp_policy": "active_passive" 00:23:53.772 } 00:23:53.772 } 00:23:53.772 ] 00:23:53.772 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.772 23:22:59 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.772 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.772 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:53.772 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.772 23:22:59 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.6xoEbjIH5U 00:23:53.772 23:22:59 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:53.772 23:22:59 -- host/async_init.sh@78 -- # nvmftestfini 00:23:53.772 23:22:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:53.772 23:22:59 -- nvmf/common.sh@116 -- # sync 00:23:53.772 23:22:59 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:53.772 23:22:59 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:53.772 23:22:59 -- nvmf/common.sh@119 -- # set +e 00:23:53.772 23:22:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:53.772 23:22:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:53.772 rmmod nvme_rdma 00:23:53.772 rmmod nvme_fabrics 00:23:54.028 23:22:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:54.028 23:22:59 -- nvmf/common.sh@123 -- # set -e 00:23:54.028 23:22:59 -- nvmf/common.sh@124 -- # return 0 00:23:54.028 23:22:59 -- nvmf/common.sh@477 -- # '[' -n 706645 ']' 00:23:54.028 23:22:59 -- nvmf/common.sh@478 -- # killprocess 706645 00:23:54.028 23:22:59 -- common/autotest_common.sh@926 -- # '[' -z 706645 ']' 00:23:54.028 23:22:59 -- common/autotest_common.sh@930 -- # kill -0 706645 00:23:54.028 23:22:59 -- common/autotest_common.sh@931 -- # uname 00:23:54.028 23:22:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:54.028 23:22:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 706645 00:23:54.028 23:22:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:54.028 23:22:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:54.028 23:22:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 706645' 00:23:54.028 killing process with pid 706645 00:23:54.028 23:22:59 -- common/autotest_common.sh@945 -- # kill 706645 00:23:54.028 23:22:59 -- common/autotest_common.sh@950 -- # wait 706645 00:23:54.287 23:22:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:54.287 23:22:59 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:54.287 00:23:54.287 real 0m8.556s 00:23:54.287 user 0m3.783s 00:23:54.287 sys 0m5.447s 00:23:54.287 23:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.287 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:54.287 ************************************ 00:23:54.287 END TEST nvmf_async_init 00:23:54.287 ************************************ 00:23:54.287 23:22:59 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:54.287 23:22:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:54.287 23:22:59 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:23:54.287 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:23:54.287 ************************************ 00:23:54.287 START TEST dma 00:23:54.287 ************************************ 00:23:54.287 23:22:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:54.287 * Looking for test storage... 00:23:54.287 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:54.287 23:22:59 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.287 23:22:59 -- nvmf/common.sh@7 -- # uname -s 00:23:54.287 23:22:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.287 23:22:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.287 23:22:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.287 23:22:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.287 23:22:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.287 23:22:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.287 23:22:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.287 23:22:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.287 23:22:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.287 23:22:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.287 23:23:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:54.287 23:23:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:54.287 23:23:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.287 23:23:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.287 23:23:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.287 23:23:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:54.287 23:23:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.287 23:23:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.287 23:23:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.287 23:23:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.287 23:23:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.287 23:23:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.287 23:23:00 -- paths/export.sh@5 -- # export PATH 00:23:54.287 23:23:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.287 23:23:00 -- nvmf/common.sh@46 -- # : 0 00:23:54.287 23:23:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:54.287 23:23:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:54.287 23:23:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:54.287 23:23:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.287 23:23:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.287 23:23:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:54.287 23:23:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:54.287 23:23:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:54.287 23:23:00 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:23:54.287 23:23:00 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:23:54.287 23:23:00 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:23:54.287 23:23:00 -- host/dma.sh@18 -- # subsystem=0 00:23:54.287 23:23:00 -- host/dma.sh@93 -- # nvmftestinit 00:23:54.287 23:23:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:54.287 23:23:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.287 23:23:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:54.287 23:23:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:54.287 23:23:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:54.287 23:23:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.287 23:23:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.287 23:23:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.287 23:23:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:54.287 23:23:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:54.287 23:23:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:54.287 23:23:00 -- common/autotest_common.sh@10 -- # set +x 00:24:00.930 23:23:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:00.930 23:23:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:00.930 23:23:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:00.930 23:23:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:00.931 23:23:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:00.931 23:23:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:00.931 23:23:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:00.931 23:23:06 -- nvmf/common.sh@294 -- # net_devs=() 
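To round off the async_init test above before the dma run gets going: the secure-channel attach and the teardown it performed condense to the calls below (note the bdev_nvme_attach_controller notice in the trace that TLS support is considered experimental in this SPDK version):

    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_get_bdevs -b nvme0n1          # same bdev, now reached via trsvcid 4421, cntlid 3
    # teardown mirrors the trace: detach, remove the key, unload the host-side modules
    rpc_cmd bdev_nvme_detach_controller nvme0
    rm -f "$key_path"
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics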
00:24:00.931 23:23:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:00.931 23:23:06 -- nvmf/common.sh@295 -- # e810=() 00:24:00.931 23:23:06 -- nvmf/common.sh@295 -- # local -ga e810 00:24:00.931 23:23:06 -- nvmf/common.sh@296 -- # x722=() 00:24:00.931 23:23:06 -- nvmf/common.sh@296 -- # local -ga x722 00:24:00.931 23:23:06 -- nvmf/common.sh@297 -- # mlx=() 00:24:00.931 23:23:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:00.931 23:23:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.931 23:23:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:00.931 23:23:06 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:00.931 23:23:06 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:00.931 23:23:06 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:00.931 23:23:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:00.931 23:23:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:00.931 23:23:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:00.931 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:00.931 23:23:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:00.931 23:23:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:00.931 23:23:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:00.931 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:00.931 23:23:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:00.931 23:23:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:00.931 23:23:06 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:00.931 23:23:06 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.931 23:23:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:00.931 23:23:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.931 23:23:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:00.931 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:00.931 23:23:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.931 23:23:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:00.931 23:23:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.931 23:23:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:00.931 23:23:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.931 23:23:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:00.931 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:00.931 23:23:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.931 23:23:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:00.931 23:23:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:00.931 23:23:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:00.931 23:23:06 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:00.931 23:23:06 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:00.931 23:23:06 -- nvmf/common.sh@57 -- # uname 00:24:00.931 23:23:06 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:00.931 23:23:06 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:00.931 23:23:06 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:00.931 23:23:06 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:01.190 23:23:06 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:01.190 23:23:06 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:01.190 23:23:06 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:01.190 23:23:06 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:01.190 23:23:06 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:01.190 23:23:06 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:01.190 23:23:06 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:01.190 23:23:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:01.190 23:23:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:01.190 23:23:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:01.190 23:23:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:01.190 23:23:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:01.190 23:23:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:01.190 23:23:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:01.190 23:23:06 -- nvmf/common.sh@104 -- # continue 2 00:24:01.190 23:23:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:01.190 23:23:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:01.190 23:23:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:01.190 23:23:06 -- 
nvmf/common.sh@104 -- # continue 2 00:24:01.190 23:23:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:01.190 23:23:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:01.190 23:23:06 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:01.190 23:23:06 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:01.190 23:23:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:01.190 23:23:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:01.190 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:01.190 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:01.190 altname enp217s0f0np0 00:24:01.190 altname ens818f0np0 00:24:01.190 inet 192.168.100.8/24 scope global mlx_0_0 00:24:01.190 valid_lft forever preferred_lft forever 00:24:01.190 23:23:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:01.190 23:23:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:01.190 23:23:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:01.190 23:23:06 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:01.190 23:23:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:01.190 23:23:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:01.190 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:01.190 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:01.190 altname enp217s0f1np1 00:24:01.190 altname ens818f1np1 00:24:01.190 inet 192.168.100.9/24 scope global mlx_0_1 00:24:01.190 valid_lft forever preferred_lft forever 00:24:01.190 23:23:06 -- nvmf/common.sh@410 -- # return 0 00:24:01.190 23:23:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:01.190 23:23:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:01.190 23:23:06 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:01.190 23:23:06 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:01.190 23:23:06 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:01.190 23:23:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:01.190 23:23:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:01.190 23:23:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:01.190 23:23:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:01.190 23:23:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:01.190 23:23:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:01.190 23:23:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:01.190 23:23:06 -- nvmf/common.sh@104 -- # continue 2 00:24:01.190 23:23:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:01.190 23:23:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:01.190 23:23:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
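The allocate_nic_ips trace above resolves each RDMA netdev's IPv4 address by piping ip -o -4 addr show through awk and cut, which is how 192.168.100.8 and 192.168.100.9 are picked up for mlx_0_0 and mlx_0_1. A minimal standalone sketch of that lookup (interface name copied from the trace; this is not the nvmf/common.sh get_ip_address helper itself):

  # Print the first IPv4 address assigned to an interface, the way the trace does for mlx_0_0.
  iface=mlx_0_0
  ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1   # expected on this rig: 192.168.100.8
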
00:24:01.190 23:23:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:01.190 23:23:06 -- nvmf/common.sh@104 -- # continue 2 00:24:01.190 23:23:06 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:01.190 23:23:06 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:01.190 23:23:06 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:01.190 23:23:06 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:01.190 23:23:06 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:01.190 23:23:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:01.190 23:23:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:01.190 23:23:06 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:01.190 192.168.100.9' 00:24:01.190 23:23:06 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:01.190 192.168.100.9' 00:24:01.190 23:23:06 -- nvmf/common.sh@445 -- # head -n 1 00:24:01.190 23:23:06 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:01.190 23:23:06 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:01.190 192.168.100.9' 00:24:01.190 23:23:06 -- nvmf/common.sh@446 -- # tail -n +2 00:24:01.190 23:23:06 -- nvmf/common.sh@446 -- # head -n 1 00:24:01.190 23:23:06 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:01.190 23:23:06 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:01.190 23:23:06 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:01.190 23:23:06 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:01.190 23:23:06 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:01.190 23:23:06 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:01.190 23:23:06 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:01.190 23:23:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:01.190 23:23:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:01.190 23:23:06 -- common/autotest_common.sh@10 -- # set +x 00:24:01.190 23:23:06 -- nvmf/common.sh@469 -- # nvmfpid=710334 00:24:01.190 23:23:06 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:01.190 23:23:06 -- nvmf/common.sh@470 -- # waitforlisten 710334 00:24:01.190 23:23:06 -- common/autotest_common.sh@819 -- # '[' -z 710334 ']' 00:24:01.190 23:23:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.190 23:23:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:01.190 23:23:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.190 23:23:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:01.190 23:23:06 -- common/autotest_common.sh@10 -- # set +x 00:24:01.448 [2024-11-02 23:23:06.953333] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:01.449 [2024-11-02 23:23:06.953382] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.449 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.449 [2024-11-02 23:23:07.021901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:01.449 [2024-11-02 23:23:07.089107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:01.449 [2024-11-02 23:23:07.089225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.449 [2024-11-02 23:23:07.089235] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.449 [2024-11-02 23:23:07.089243] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.449 [2024-11-02 23:23:07.089293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.449 [2024-11-02 23:23:07.089295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.014 23:23:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:02.014 23:23:07 -- common/autotest_common.sh@852 -- # return 0 00:24:02.014 23:23:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:02.014 23:23:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:02.014 23:23:07 -- common/autotest_common.sh@10 -- # set +x 00:24:02.272 23:23:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.272 23:23:07 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:02.272 23:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.272 23:23:07 -- common/autotest_common.sh@10 -- # set +x 00:24:02.273 [2024-11-02 23:23:07.841645] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbb6a60/0xbbaf50) succeed. 00:24:02.273 [2024-11-02 23:23:07.850591] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbb7f60/0xbfc5f0) succeed. 
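With the RDMA transport created (the two create_ib_device ... succeed notices above), host/dma.sh finishes building the target over the RPC socket it waited for (/var/tmp/spdk.sock): a 256 MB malloc bdev, a subsystem, a namespace, and an RDMA listener on 192.168.100.8:4420, as traced below. The rpc_cmd helper used in the trace effectively forwards to SPDK's scripts/rpc.py, so an equivalent manual sequence would look roughly like this sketch (arguments copied from the trace, including the transport call already shown above; a running nvmf_tgt is assumed):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
  $RPC bdev_malloc_create 256 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
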
00:24:02.273 23:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.273 23:23:07 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:02.273 23:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.273 23:23:07 -- common/autotest_common.sh@10 -- # set +x 00:24:02.273 Malloc0 00:24:02.273 23:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.273 23:23:07 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:02.273 23:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.273 23:23:07 -- common/autotest_common.sh@10 -- # set +x 00:24:02.273 23:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.273 23:23:07 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:02.273 23:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.273 23:23:07 -- common/autotest_common.sh@10 -- # set +x 00:24:02.273 23:23:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.273 23:23:08 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:02.273 23:23:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.273 23:23:08 -- common/autotest_common.sh@10 -- # set +x 00:24:02.273 [2024-11-02 23:23:08.012004] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:02.273 23:23:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.273 23:23:08 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:02.273 23:23:08 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:02.273 23:23:08 -- nvmf/common.sh@520 -- # config=() 00:24:02.273 23:23:08 -- nvmf/common.sh@520 -- # local subsystem config 00:24:02.273 23:23:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.273 23:23:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.273 { 00:24:02.273 "params": { 00:24:02.273 "name": "Nvme$subsystem", 00:24:02.273 "trtype": "$TEST_TRANSPORT", 00:24:02.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.273 "adrfam": "ipv4", 00:24:02.273 "trsvcid": "$NVMF_PORT", 00:24:02.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.273 "hdgst": ${hdgst:-false}, 00:24:02.273 "ddgst": ${ddgst:-false} 00:24:02.273 }, 00:24:02.273 "method": "bdev_nvme_attach_controller" 00:24:02.273 } 00:24:02.273 EOF 00:24:02.273 )") 00:24:02.273 23:23:08 -- nvmf/common.sh@542 -- # cat 00:24:02.273 23:23:08 -- nvmf/common.sh@544 -- # jq . 00:24:02.530 23:23:08 -- nvmf/common.sh@545 -- # IFS=, 00:24:02.530 23:23:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:02.530 "params": { 00:24:02.530 "name": "Nvme0", 00:24:02.530 "trtype": "rdma", 00:24:02.530 "traddr": "192.168.100.8", 00:24:02.530 "adrfam": "ipv4", 00:24:02.530 "trsvcid": "4420", 00:24:02.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.530 "hdgst": false, 00:24:02.530 "ddgst": false 00:24:02.530 }, 00:24:02.530 "method": "bdev_nvme_attach_controller" 00:24:02.530 }' 00:24:02.530 [2024-11-02 23:23:08.060795] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:02.530 [2024-11-02 23:23:08.060841] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710600 ] 00:24:02.530 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.530 [2024-11-02 23:23:08.126367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:02.530 [2024-11-02 23:23:08.194666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.530 [2024-11-02 23:23:08.194669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.082 bdev Nvme0n1 reports 1 memory domains 00:24:09.082 bdev Nvme0n1 supports RDMA memory domain 00:24:09.082 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:09.082 ========================================================================== 00:24:09.082 Latency [us] 00:24:09.082 IOPS MiB/s Average min max 00:24:09.082 Core 2: 22219.79 86.80 719.41 233.83 8072.87 00:24:09.082 Core 3: 22327.58 87.22 715.90 240.48 8099.27 00:24:09.082 ========================================================================== 00:24:09.082 Total : 44547.37 174.01 717.65 233.83 8099.27 00:24:09.082 00:24:09.082 Total operations: 222753, translate 222753 pull_push 0 memzero 0 00:24:09.082 23:23:13 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:09.082 23:23:13 -- host/dma.sh@107 -- # gen_malloc_json 00:24:09.082 23:23:13 -- host/dma.sh@21 -- # jq . 00:24:09.083 [2024-11-02 23:23:13.652842] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:09.083 [2024-11-02 23:23:13.652900] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711437 ] 00:24:09.083 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.083 [2024-11-02 23:23:13.720127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:09.083 [2024-11-02 23:23:13.784658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.083 [2024-11-02 23:23:13.784661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.348 bdev Malloc0 reports 1 memory domains 00:24:14.348 bdev Malloc0 doesn't support RDMA memory domain 00:24:14.348 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:14.348 ========================================================================== 00:24:14.348 Latency [us] 00:24:14.348 IOPS MiB/s Average min max 00:24:14.348 Core 2: 14985.43 58.54 1066.99 391.91 1346.16 00:24:14.348 Core 3: 15260.93 59.61 1047.69 398.47 1922.08 00:24:14.348 ========================================================================== 00:24:14.348 Total : 30246.35 118.15 1057.25 391.91 1922.08 00:24:14.348 00:24:14.348 Total operations: 151286, translate 0 pull_push 605144 memzero 0 00:24:14.348 23:23:19 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:14.348 23:23:19 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:14.348 23:23:19 -- host/dma.sh@48 -- # local subsystem=0 00:24:14.348 23:23:19 -- host/dma.sh@50 -- # jq . 00:24:14.348 Ignoring -M option 00:24:14.348 [2024-11-02 23:23:19.153720] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:14.348 [2024-11-02 23:23:19.153771] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid712446 ] 00:24:14.348 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.348 [2024-11-02 23:23:19.218724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:14.348 [2024-11-02 23:23:19.286347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.348 [2024-11-02 23:23:19.286350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.348 [2024-11-02 23:23:19.493667] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:19.619 [2024-11-02 23:23:24.522974] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:19.619 bdev 5a80a379-ee69-4265-9a04-e88a7cc488c4 reports 1 memory domains 00:24:19.619 bdev 5a80a379-ee69-4265-9a04-e88a7cc488c4 supports RDMA memory domain 00:24:19.619 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:19.619 ========================================================================== 00:24:19.619 Latency [us] 00:24:19.619 IOPS MiB/s Average min max 00:24:19.620 Core 2: 71735.01 280.21 222.16 81.25 1580.31 00:24:19.620 Core 3: 69837.01 272.80 228.16 70.88 1495.80 00:24:19.620 ========================================================================== 00:24:19.620 Total : 141572.02 553.02 225.12 70.88 1580.31 00:24:19.620 00:24:19.620 Total operations: 707933, translate 0 pull_push 0 memzero 707933 00:24:19.620 23:23:24 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:19.620 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.620 [2024-11-02 23:23:24.848513] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:21.525 Initializing NVMe Controllers 00:24:21.525 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:24:21.525 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:21.525 Initialization complete. Launching workers. 00:24:21.525 ======================================================== 00:24:21.525 Latency(us) 00:24:21.525 Device Information : IOPS MiB/s Average min max 00:24:21.525 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7995.93 4987.99 11939.13 00:24:21.525 ======================================================== 00:24:21.525 Total : 2016.00 7.88 7995.93 4987.99 11939.13 00:24:21.525 00:24:21.525 23:23:27 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:24:21.525 23:23:27 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:24:21.525 23:23:27 -- host/dma.sh@48 -- # local subsystem=0 00:24:21.525 23:23:27 -- host/dma.sh@50 -- # jq . 
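Each summary table above follows the same pattern: the Total row sums the per-core IOPS and MiB/s, the min/max columns take the extremes across cores, and the Average column is consistent with an IOPS-weighted mean of the per-core latencies. As a worked check against the first translate run (Core 2: 22219.79 IOPS, Core 3: 22327.58 IOPS, 4096-byte I/Os), a one-line awk reproduces the Total row:

  echo "22219.79 22327.58" | awk '{printf "%.2f IOPS, %.2f MiB/s\n", $1+$2, ($1+$2)*4096/1048576}'
  # -> 44547.37 IOPS, 174.01 MiB/s, matching the Total : 44547.37 174.01 line in that table
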
00:24:21.525 [2024-11-02 23:23:27.196034] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:21.525 [2024-11-02 23:23:27.196091] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713808 ] 00:24:21.525 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.525 [2024-11-02 23:23:27.262667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:21.784 [2024-11-02 23:23:27.331994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.784 [2024-11-02 23:23:27.331998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.784 [2024-11-02 23:23:27.538987] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:27.051 [2024-11-02 23:23:32.568735] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:27.051 bdev 759bbff7-c5f1-40a0-90e1-39d901f2ed75 reports 1 memory domains 00:24:27.051 bdev 759bbff7-c5f1-40a0-90e1-39d901f2ed75 supports RDMA memory domain 00:24:27.051 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:27.051 ========================================================================== 00:24:27.051 Latency [us] 00:24:27.051 IOPS MiB/s Average min max 00:24:27.051 Core 2: 19492.41 76.14 820.17 42.70 10202.62 00:24:27.051 Core 3: 19850.95 77.54 805.30 12.87 10383.32 00:24:27.051 ========================================================================== 00:24:27.051 Total : 39343.36 153.69 812.67 12.87 10383.32 00:24:27.051 00:24:27.051 Total operations: 196749, translate 196644 pull_push 0 memzero 105 00:24:27.051 23:23:32 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:27.051 23:23:32 -- host/dma.sh@120 -- # nvmftestfini 00:24:27.051 23:23:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:27.051 23:23:32 -- nvmf/common.sh@116 -- # sync 00:24:27.051 23:23:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:27.052 23:23:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:27.052 23:23:32 -- nvmf/common.sh@119 -- # set +e 00:24:27.052 23:23:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:27.310 23:23:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:27.310 rmmod nvme_rdma 00:24:27.310 rmmod nvme_fabrics 00:24:27.310 23:23:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:27.310 23:23:32 -- nvmf/common.sh@123 -- # set -e 00:24:27.310 23:23:32 -- nvmf/common.sh@124 -- # return 0 00:24:27.310 23:23:32 -- nvmf/common.sh@477 -- # '[' -n 710334 ']' 00:24:27.310 23:23:32 -- nvmf/common.sh@478 -- # killprocess 710334 00:24:27.310 23:23:32 -- common/autotest_common.sh@926 -- # '[' -z 710334 ']' 00:24:27.310 23:23:32 -- common/autotest_common.sh@930 -- # kill -0 710334 00:24:27.310 23:23:32 -- common/autotest_common.sh@931 -- # uname 00:24:27.310 23:23:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:27.310 23:23:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 710334 00:24:27.310 23:23:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:27.310 23:23:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:27.310 23:23:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
710334' 00:24:27.310 killing process with pid 710334 00:24:27.310 23:23:32 -- common/autotest_common.sh@945 -- # kill 710334 00:24:27.310 23:23:32 -- common/autotest_common.sh@950 -- # wait 710334 00:24:27.569 23:23:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:27.569 23:23:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:27.569 00:24:27.569 real 0m33.347s 00:24:27.569 user 1m36.996s 00:24:27.569 sys 0m6.497s 00:24:27.569 23:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.569 23:23:33 -- common/autotest_common.sh@10 -- # set +x 00:24:27.569 ************************************ 00:24:27.569 END TEST dma 00:24:27.569 ************************************ 00:24:27.569 23:23:33 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:27.569 23:23:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:27.569 23:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:27.569 23:23:33 -- common/autotest_common.sh@10 -- # set +x 00:24:27.569 ************************************ 00:24:27.569 START TEST nvmf_identify 00:24:27.569 ************************************ 00:24:27.569 23:23:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:27.828 * Looking for test storage... 00:24:27.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:27.828 23:23:33 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.828 23:23:33 -- nvmf/common.sh@7 -- # uname -s 00:24:27.828 23:23:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.828 23:23:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.828 23:23:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.828 23:23:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.828 23:23:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.828 23:23:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.828 23:23:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.828 23:23:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.828 23:23:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.828 23:23:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.828 23:23:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:27.828 23:23:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:27.828 23:23:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.828 23:23:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.828 23:23:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.828 23:23:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:27.828 23:23:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.828 23:23:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.828 23:23:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.828 23:23:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.828 23:23:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.828 23:23:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.828 23:23:33 -- paths/export.sh@5 -- # export PATH 00:24:27.829 23:23:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.829 23:23:33 -- nvmf/common.sh@46 -- # : 0 00:24:27.829 23:23:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:27.829 23:23:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:27.829 23:23:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:27.829 23:23:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.829 23:23:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.829 23:23:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:27.829 23:23:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:27.829 23:23:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:27.829 23:23:33 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:27.829 23:23:33 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.829 23:23:33 -- host/identify.sh@14 -- # nvmftestinit 00:24:27.829 23:23:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:27.829 23:23:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.829 23:23:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:27.829 23:23:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:27.829 23:23:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:27.829 23:23:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:24:27.829 23:23:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.829 23:23:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.829 23:23:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:27.829 23:23:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:27.829 23:23:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:27.829 23:23:33 -- common/autotest_common.sh@10 -- # set +x 00:24:34.398 23:23:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:34.398 23:23:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:34.398 23:23:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:34.398 23:23:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:34.398 23:23:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:34.398 23:23:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:34.398 23:23:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:34.398 23:23:39 -- nvmf/common.sh@294 -- # net_devs=() 00:24:34.398 23:23:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:34.398 23:23:39 -- nvmf/common.sh@295 -- # e810=() 00:24:34.398 23:23:39 -- nvmf/common.sh@295 -- # local -ga e810 00:24:34.398 23:23:39 -- nvmf/common.sh@296 -- # x722=() 00:24:34.398 23:23:39 -- nvmf/common.sh@296 -- # local -ga x722 00:24:34.398 23:23:39 -- nvmf/common.sh@297 -- # mlx=() 00:24:34.398 23:23:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:34.398 23:23:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.398 23:23:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:34.398 23:23:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:34.398 23:23:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:34.398 23:23:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:34.398 23:23:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:34.398 23:23:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:34.398 23:23:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:34.398 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:34.398 23:23:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
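The device scan running through this stretch of the trace (and earlier in the dma test) builds the mlx array from Mellanox vendor/device IDs and then maps each matching PCI function to its netdev through the sysfs glob /sys/bus/pci/devices/$pci/net/*. A stripped-down sketch of that mapping for the two 0x15b3:0x1015 functions found on this rig; this is not the actual gather_supported_nvmf_pci_devs implementation:

  # List the net devices behind each Mellanox 0x1015 function, as the trace does via sysfs.
  for pci in $(lspci -D -d 15b3:1015 | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: $(basename "$net")"
      done
  done
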
00:24:34.398 23:23:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:34.398 23:23:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:34.398 23:23:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:34.398 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:34.398 23:23:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:34.398 23:23:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:34.398 23:23:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:34.398 23:23:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:34.398 23:23:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.398 23:23:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:34.398 23:23:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.398 23:23:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:34.398 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:34.398 23:23:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.398 23:23:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:34.398 23:23:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.398 23:23:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:34.398 23:23:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.398 23:23:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:34.398 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:34.398 23:23:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.398 23:23:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:34.398 23:23:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:34.398 23:23:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:34.399 23:23:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:34.399 23:23:39 -- nvmf/common.sh@57 -- # uname 00:24:34.399 23:23:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:34.399 23:23:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:34.399 23:23:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:34.399 23:23:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:34.399 23:23:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:34.399 23:23:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:34.399 23:23:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:34.399 23:23:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:34.399 23:23:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:34.399 23:23:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:34.399 23:23:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:34.399 23:23:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:34.399 23:23:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:34.399 23:23:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:34.399 23:23:39 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:34.399 23:23:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:34.399 23:23:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:34.399 23:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.399 23:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:34.399 23:23:39 -- nvmf/common.sh@104 -- # continue 2 00:24:34.399 23:23:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:34.399 23:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.399 23:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.399 23:23:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:34.399 23:23:39 -- nvmf/common.sh@104 -- # continue 2 00:24:34.399 23:23:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:34.399 23:23:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:34.399 23:23:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:34.399 23:23:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:34.399 23:23:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:34.399 23:23:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:34.399 23:23:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:34.399 23:23:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:34.399 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:34.399 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:34.399 altname enp217s0f0np0 00:24:34.399 altname ens818f0np0 00:24:34.399 inet 192.168.100.8/24 scope global mlx_0_0 00:24:34.399 valid_lft forever preferred_lft forever 00:24:34.399 23:23:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:34.399 23:23:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:34.399 23:23:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:34.399 23:23:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:34.399 23:23:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:34.399 23:23:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:34.399 23:23:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:34.399 23:23:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:34.399 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:34.399 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:34.399 altname enp217s0f1np1 00:24:34.399 altname ens818f1np1 00:24:34.399 inet 192.168.100.9/24 scope global mlx_0_1 00:24:34.399 valid_lft forever preferred_lft forever 00:24:34.399 23:23:39 -- nvmf/common.sh@410 -- # return 0 00:24:34.399 23:23:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:34.399 23:23:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:34.399 23:23:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:34.399 23:23:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:34.399 23:23:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:34.399 23:23:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:34.399 23:23:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:34.399 23:23:39 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:34.399 23:23:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:34.399 23:23:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:34.399 23:23:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:34.399 23:23:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.399 23:23:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:34.399 23:23:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:34.399 23:23:40 -- nvmf/common.sh@104 -- # continue 2 00:24:34.399 23:23:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:34.399 23:23:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.399 23:23:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:34.399 23:23:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.399 23:23:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:34.399 23:23:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:34.399 23:23:40 -- nvmf/common.sh@104 -- # continue 2 00:24:34.399 23:23:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:34.399 23:23:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:34.399 23:23:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:34.399 23:23:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:34.399 23:23:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:34.399 23:23:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:34.399 23:23:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:34.399 23:23:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:34.399 23:23:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:34.399 23:23:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:34.399 23:23:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:34.399 23:23:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:34.399 23:23:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:34.399 192.168.100.9' 00:24:34.399 23:23:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:34.399 192.168.100.9' 00:24:34.399 23:23:40 -- nvmf/common.sh@445 -- # head -n 1 00:24:34.399 23:23:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:34.399 23:23:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:34.399 192.168.100.9' 00:24:34.399 23:23:40 -- nvmf/common.sh@446 -- # tail -n +2 00:24:34.399 23:23:40 -- nvmf/common.sh@446 -- # head -n 1 00:24:34.399 23:23:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:34.399 23:23:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:34.399 23:23:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:34.399 23:23:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:34.399 23:23:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:34.399 23:23:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:34.399 23:23:40 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:34.399 23:23:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:34.399 23:23:40 -- common/autotest_common.sh@10 -- # set +x 00:24:34.399 23:23:40 -- host/identify.sh@19 -- # nvmfpid=718120 00:24:34.399 23:23:40 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:34.399 23:23:40 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:34.399 23:23:40 -- host/identify.sh@23 -- # waitforlisten 718120 00:24:34.399 23:23:40 -- common/autotest_common.sh@819 -- # '[' -z 718120 ']' 00:24:34.399 23:23:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.399 23:23:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:34.399 23:23:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.399 23:23:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:34.399 23:23:40 -- common/autotest_common.sh@10 -- # set +x 00:24:34.399 [2024-11-02 23:23:40.132148] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:34.399 [2024-11-02 23:23:40.132195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.658 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.658 [2024-11-02 23:23:40.202741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.658 [2024-11-02 23:23:40.272780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:34.658 [2024-11-02 23:23:40.272939] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.658 [2024-11-02 23:23:40.272949] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.658 [2024-11-02 23:23:40.272959] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.658 [2024-11-02 23:23:40.273016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.658 [2024-11-02 23:23:40.273130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.658 [2024-11-02 23:23:40.273195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.658 [2024-11-02 23:23:40.273197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.225 23:23:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:35.225 23:23:40 -- common/autotest_common.sh@852 -- # return 0 00:24:35.225 23:23:40 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:35.225 23:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.225 23:23:40 -- common/autotest_common.sh@10 -- # set +x 00:24:35.225 [2024-11-02 23:23:40.973338] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa08090/0xa0c580) succeed. 00:24:35.485 [2024-11-02 23:23:40.982695] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa09680/0xa4dc20) succeed. 
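For the identify test the target is built much like it was for the dma test, with two additions traced below: the namespace gets an explicit NGUID and EUI-64, and a listener is also added to the discovery subsystem so the identify tool can walk the discovery log page. Expressed as an equivalent rpc.py sketch (same assumption as the earlier sketch that rpc_cmd forwards to scripts/rpc.py; arguments copied from the trace):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_get_subsystems
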
00:24:35.485 23:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.485 23:23:41 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:35.485 23:23:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:35.485 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:35.485 23:23:41 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:35.485 23:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.485 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:35.485 Malloc0 00:24:35.485 23:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.485 23:23:41 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.485 23:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.485 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:35.485 23:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.485 23:23:41 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:35.485 23:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.485 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:35.485 23:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.485 23:23:41 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:35.485 23:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.485 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:35.485 [2024-11-02 23:23:41.193341] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:35.485 23:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.485 23:23:41 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:35.485 23:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.485 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:35.485 23:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.485 23:23:41 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:35.485 23:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.485 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:35.485 [2024-11-02 23:23:41.209009] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:35.485 [ 00:24:35.485 { 00:24:35.485 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:35.485 "subtype": "Discovery", 00:24:35.485 "listen_addresses": [ 00:24:35.485 { 00:24:35.485 "transport": "RDMA", 00:24:35.485 "trtype": "RDMA", 00:24:35.485 "adrfam": "IPv4", 00:24:35.485 "traddr": "192.168.100.8", 00:24:35.485 "trsvcid": "4420" 00:24:35.485 } 00:24:35.485 ], 00:24:35.485 "allow_any_host": true, 00:24:35.485 "hosts": [] 00:24:35.485 }, 00:24:35.485 { 00:24:35.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.485 "subtype": "NVMe", 00:24:35.485 "listen_addresses": [ 00:24:35.485 { 00:24:35.485 "transport": "RDMA", 00:24:35.485 "trtype": "RDMA", 00:24:35.485 "adrfam": "IPv4", 00:24:35.485 "traddr": "192.168.100.8", 00:24:35.485 "trsvcid": "4420" 00:24:35.485 } 00:24:35.485 ], 00:24:35.485 "allow_any_host": true, 00:24:35.485 "hosts": [], 00:24:35.485 "serial_number": "SPDK00000000000001", 
00:24:35.485 "model_number": "SPDK bdev Controller", 00:24:35.485 "max_namespaces": 32, 00:24:35.485 "min_cntlid": 1, 00:24:35.485 "max_cntlid": 65519, 00:24:35.485 "namespaces": [ 00:24:35.485 { 00:24:35.485 "nsid": 1, 00:24:35.485 "bdev_name": "Malloc0", 00:24:35.485 "name": "Malloc0", 00:24:35.485 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:35.485 "eui64": "ABCDEF0123456789", 00:24:35.485 "uuid": "2b2d19bc-8015-4e87-b4f0-b4d1b46c19a4" 00:24:35.485 } 00:24:35.485 ] 00:24:35.485 } 00:24:35.485 ] 00:24:35.485 23:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.485 23:23:41 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:35.751 [2024-11-02 23:23:41.250184] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:35.751 [2024-11-02 23:23:41.250229] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718200 ] 00:24:35.751 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.751 [2024-11-02 23:23:41.299096] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:35.751 [2024-11-02 23:23:41.299171] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:24:35.751 [2024-11-02 23:23:41.299190] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:24:35.751 [2024-11-02 23:23:41.299195] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:24:35.751 [2024-11-02 23:23:41.299226] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:35.751 [2024-11-02 23:23:41.310484] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:24:35.751 [2024-11-02 23:23:41.320561] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:35.751 [2024-11-02 23:23:41.320573] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:24:35.751 [2024-11-02 23:23:41.320581] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320588] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320594] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320600] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320607] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320616] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320622] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320628] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320634] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320640] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320646] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320653] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320659] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320665] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320671] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320677] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320683] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320689] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320696] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320702] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320708] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320714] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320720] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 
23:23:41.320726] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320732] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:35.751 [2024-11-02 23:23:41.320738] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.320745] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.320751] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.320757] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.320763] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.320769] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.320775] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:24:35.752 [2024-11-02 23:23:41.320781] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:35.752 [2024-11-02 23:23:41.320785] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:24:35.752 [2024-11-02 23:23:41.320807] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.320821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:24:35.752 [2024-11-02 23:23:41.325974] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.325989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.325997] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326006] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:35.752 [2024-11-02 23:23:41.326013] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:35.752 [2024-11-02 23:23:41.326020] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:35.752 [2024-11-02 23:23:41.326034] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.752 [2024-11-02 23:23:41.326068] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.326074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.326081] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:35.752 [2024-11-02 23:23:41.326087] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326094] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:35.752 [2024-11-02 23:23:41.326101] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.752 [2024-11-02 23:23:41.326134] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.326139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.326146] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:35.752 [2024-11-02 23:23:41.326152] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326159] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:35.752 [2024-11-02 23:23:41.326167] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.752 [2024-11-02 23:23:41.326197] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.326203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.326209] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:35.752 [2024-11-02 23:23:41.326215] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326224] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.752 [2024-11-02 23:23:41.326249] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.326256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.326263] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:35.752 [2024-11-02 23:23:41.326269] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:35.752 [2024-11-02 23:23:41.326275] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326282] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:35.752 [2024-11-02 23:23:41.326389] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:35.752 [2024-11-02 23:23:41.326394] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:35.752 [2024-11-02 23:23:41.326405] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.752 [2024-11-02 23:23:41.326428] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.326434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.326440] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:35.752 [2024-11-02 23:23:41.326446] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326454] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.752 [2024-11-02 23:23:41.326481] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.326487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.326493] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:35.752 [2024-11-02 23:23:41.326499] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:35.752 [2024-11-02 23:23:41.326505] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326512] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:35.752 [2024-11-02 23:23:41.326520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:35.752 [2024-11-02 23:23:41.326530] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:35.752 [2024-11-02 23:23:41.326574] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.326580] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.326590] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:35.752 [2024-11-02 23:23:41.326598] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:35.752 [2024-11-02 23:23:41.326605] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:35.752 [2024-11-02 23:23:41.326612] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:35.752 [2024-11-02 23:23:41.326617] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:35.752 [2024-11-02 23:23:41.326623] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:35.752 [2024-11-02 23:23:41.326629] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326640] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:35.752 [2024-11-02 23:23:41.326648] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.752 [2024-11-02 23:23:41.326678] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.752 [2024-11-02 23:23:41.326684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:35.752 [2024-11-02 23:23:41.326693] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.752 [2024-11-02 23:23:41.326707] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:24:35.752 [2024-11-02 23:23:41.326714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.752 [2024-11-02 23:23:41.326721] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.753 [2024-11-02 23:23:41.326734] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.753 [2024-11-02 23:23:41.326747] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:24:35.753 [2024-11-02 23:23:41.326753] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326764] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:35.753 [2024-11-02 23:23:41.326771] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.753 [2024-11-02 23:23:41.326795] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.753 [2024-11-02 23:23:41.326800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:24:35.753 [2024-11-02 23:23:41.326807] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:35.753 [2024-11-02 23:23:41.326815] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:35.753 [2024-11-02 23:23:41.326821] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326830] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:35.753 [2024-11-02 23:23:41.326863] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.753 [2024-11-02 23:23:41.326869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:35.753 [2024-11-02 23:23:41.326876] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326887] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:35.753 [2024-11-02 23:23:41.326909] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183d00 00:24:35.753 [2024-11-02 23:23:41.326925] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.753 [2024-11-02 23:23:41.326947] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.753 [2024-11-02 23:23:41.326952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:35.753 [2024-11-02 23:23:41.326963] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183d00 00:24:35.753 [2024-11-02 23:23:41.326982] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.326988] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.753 [2024-11-02 23:23:41.326994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:35.753 [2024-11-02 23:23:41.327000] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.327010] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.753 [2024-11-02 23:23:41.327015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:35.753 [2024-11-02 23:23:41.327025] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.327032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183d00 00:24:35.753 [2024-11-02 23:23:41.327038] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:35.753 [2024-11-02 23:23:41.327057] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.753 [2024-11-02 23:23:41.327063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:35.753 [2024-11-02 23:23:41.327074] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:35.753 ===================================================== 00:24:35.753 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:35.753 ===================================================== 00:24:35.753 Controller Capabilities/Features 00:24:35.753 ================================ 00:24:35.753 Vendor ID: 0000 00:24:35.753 Subsystem Vendor ID: 0000 00:24:35.753 Serial Number: .................... 00:24:35.753 Model Number: ........................................ 
00:24:35.753 Firmware Version: 24.01.1 00:24:35.753 Recommended Arb Burst: 0 00:24:35.753 IEEE OUI Identifier: 00 00 00 00:24:35.753 Multi-path I/O 00:24:35.753 May have multiple subsystem ports: No 00:24:35.753 May have multiple controllers: No 00:24:35.753 Associated with SR-IOV VF: No 00:24:35.753 Max Data Transfer Size: 131072 00:24:35.753 Max Number of Namespaces: 0 00:24:35.753 Max Number of I/O Queues: 1024 00:24:35.753 NVMe Specification Version (VS): 1.3 00:24:35.753 NVMe Specification Version (Identify): 1.3 00:24:35.753 Maximum Queue Entries: 128 00:24:35.753 Contiguous Queues Required: Yes 00:24:35.753 Arbitration Mechanisms Supported 00:24:35.753 Weighted Round Robin: Not Supported 00:24:35.753 Vendor Specific: Not Supported 00:24:35.753 Reset Timeout: 15000 ms 00:24:35.753 Doorbell Stride: 4 bytes 00:24:35.753 NVM Subsystem Reset: Not Supported 00:24:35.753 Command Sets Supported 00:24:35.753 NVM Command Set: Supported 00:24:35.753 Boot Partition: Not Supported 00:24:35.753 Memory Page Size Minimum: 4096 bytes 00:24:35.753 Memory Page Size Maximum: 4096 bytes 00:24:35.753 Persistent Memory Region: Not Supported 00:24:35.753 Optional Asynchronous Events Supported 00:24:35.753 Namespace Attribute Notices: Not Supported 00:24:35.753 Firmware Activation Notices: Not Supported 00:24:35.753 ANA Change Notices: Not Supported 00:24:35.753 PLE Aggregate Log Change Notices: Not Supported 00:24:35.753 LBA Status Info Alert Notices: Not Supported 00:24:35.753 EGE Aggregate Log Change Notices: Not Supported 00:24:35.753 Normal NVM Subsystem Shutdown event: Not Supported 00:24:35.753 Zone Descriptor Change Notices: Not Supported 00:24:35.753 Discovery Log Change Notices: Supported 00:24:35.753 Controller Attributes 00:24:35.753 128-bit Host Identifier: Not Supported 00:24:35.753 Non-Operational Permissive Mode: Not Supported 00:24:35.753 NVM Sets: Not Supported 00:24:35.753 Read Recovery Levels: Not Supported 00:24:35.753 Endurance Groups: Not Supported 00:24:35.753 Predictable Latency Mode: Not Supported 00:24:35.753 Traffic Based Keep ALive: Not Supported 00:24:35.753 Namespace Granularity: Not Supported 00:24:35.753 SQ Associations: Not Supported 00:24:35.753 UUID List: Not Supported 00:24:35.753 Multi-Domain Subsystem: Not Supported 00:24:35.753 Fixed Capacity Management: Not Supported 00:24:35.753 Variable Capacity Management: Not Supported 00:24:35.753 Delete Endurance Group: Not Supported 00:24:35.753 Delete NVM Set: Not Supported 00:24:35.753 Extended LBA Formats Supported: Not Supported 00:24:35.753 Flexible Data Placement Supported: Not Supported 00:24:35.753 00:24:35.753 Controller Memory Buffer Support 00:24:35.753 ================================ 00:24:35.753 Supported: No 00:24:35.753 00:24:35.753 Persistent Memory Region Support 00:24:35.753 ================================ 00:24:35.753 Supported: No 00:24:35.753 00:24:35.753 Admin Command Set Attributes 00:24:35.753 ============================ 00:24:35.753 Security Send/Receive: Not Supported 00:24:35.753 Format NVM: Not Supported 00:24:35.753 Firmware Activate/Download: Not Supported 00:24:35.753 Namespace Management: Not Supported 00:24:35.753 Device Self-Test: Not Supported 00:24:35.753 Directives: Not Supported 00:24:35.753 NVMe-MI: Not Supported 00:24:35.753 Virtualization Management: Not Supported 00:24:35.753 Doorbell Buffer Config: Not Supported 00:24:35.753 Get LBA Status Capability: Not Supported 00:24:35.753 Command & Feature Lockdown Capability: Not Supported 00:24:35.753 Abort Command Limit: 1 00:24:35.753 
Async Event Request Limit: 4 00:24:35.753 Number of Firmware Slots: N/A 00:24:35.753 Firmware Slot 1 Read-Only: N/A 00:24:35.753 Firmware Activation Without Reset: N/A 00:24:35.753 Multiple Update Detection Support: N/A 00:24:35.754 Firmware Update Granularity: No Information Provided 00:24:35.754 Per-Namespace SMART Log: No 00:24:35.754 Asymmetric Namespace Access Log Page: Not Supported 00:24:35.754 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:35.754 Command Effects Log Page: Not Supported 00:24:35.754 Get Log Page Extended Data: Supported 00:24:35.754 Telemetry Log Pages: Not Supported 00:24:35.754 Persistent Event Log Pages: Not Supported 00:24:35.754 Supported Log Pages Log Page: May Support 00:24:35.754 Commands Supported & Effects Log Page: Not Supported 00:24:35.754 Feature Identifiers & Effects Log Page:May Support 00:24:35.754 NVMe-MI Commands & Effects Log Page: May Support 00:24:35.754 Data Area 4 for Telemetry Log: Not Supported 00:24:35.754 Error Log Page Entries Supported: 128 00:24:35.754 Keep Alive: Not Supported 00:24:35.754 00:24:35.754 NVM Command Set Attributes 00:24:35.754 ========================== 00:24:35.754 Submission Queue Entry Size 00:24:35.754 Max: 1 00:24:35.754 Min: 1 00:24:35.754 Completion Queue Entry Size 00:24:35.754 Max: 1 00:24:35.754 Min: 1 00:24:35.754 Number of Namespaces: 0 00:24:35.754 Compare Command: Not Supported 00:24:35.754 Write Uncorrectable Command: Not Supported 00:24:35.754 Dataset Management Command: Not Supported 00:24:35.754 Write Zeroes Command: Not Supported 00:24:35.754 Set Features Save Field: Not Supported 00:24:35.754 Reservations: Not Supported 00:24:35.754 Timestamp: Not Supported 00:24:35.754 Copy: Not Supported 00:24:35.754 Volatile Write Cache: Not Present 00:24:35.754 Atomic Write Unit (Normal): 1 00:24:35.754 Atomic Write Unit (PFail): 1 00:24:35.754 Atomic Compare & Write Unit: 1 00:24:35.754 Fused Compare & Write: Supported 00:24:35.754 Scatter-Gather List 00:24:35.754 SGL Command Set: Supported 00:24:35.754 SGL Keyed: Supported 00:24:35.754 SGL Bit Bucket Descriptor: Not Supported 00:24:35.754 SGL Metadata Pointer: Not Supported 00:24:35.754 Oversized SGL: Not Supported 00:24:35.754 SGL Metadata Address: Not Supported 00:24:35.754 SGL Offset: Supported 00:24:35.754 Transport SGL Data Block: Not Supported 00:24:35.754 Replay Protected Memory Block: Not Supported 00:24:35.754 00:24:35.754 Firmware Slot Information 00:24:35.754 ========================= 00:24:35.754 Active slot: 0 00:24:35.754 00:24:35.754 00:24:35.754 Error Log 00:24:35.754 ========= 00:24:35.754 00:24:35.754 Active Namespaces 00:24:35.754 ================= 00:24:35.754 Discovery Log Page 00:24:35.754 ================== 00:24:35.754 Generation Counter: 2 00:24:35.754 Number of Records: 2 00:24:35.754 Record Format: 0 00:24:35.754 00:24:35.754 Discovery Log Entry 0 00:24:35.754 ---------------------- 00:24:35.754 Transport Type: 1 (RDMA) 00:24:35.754 Address Family: 1 (IPv4) 00:24:35.754 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:35.754 Entry Flags: 00:24:35.754 Duplicate Returned Information: 1 00:24:35.754 Explicit Persistent Connection Support for Discovery: 1 00:24:35.754 Transport Requirements: 00:24:35.754 Secure Channel: Not Required 00:24:35.754 Port ID: 0 (0x0000) 00:24:35.754 Controller ID: 65535 (0xffff) 00:24:35.754 Admin Max SQ Size: 128 00:24:35.754 Transport Service Identifier: 4420 00:24:35.754 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:35.754 Transport Address: 192.168.100.8 
00:24:35.754 Transport Specific Address Subtype - RDMA 00:24:35.754 RDMA QP Service Type: 1 (Reliable Connected) 00:24:35.754 RDMA Provider Type: 1 (No provider specified) 00:24:35.754 RDMA CM Service: 1 (RDMA_CM) 00:24:35.754 Discovery Log Entry 1 00:24:35.754 ---------------------- 00:24:35.754 Transport Type: 1 (RDMA) 00:24:35.754 Address Family: 1 (IPv4) 00:24:35.754 Subsystem Type: 2 (NVM Subsystem) 00:24:35.754 Entry Flags: 00:24:35.754 Duplicate Returned Information: 0 00:24:35.754 Explicit Persistent Connection Support for Discovery: 0 00:24:35.754 Transport Requirements: 00:24:35.754 Secure Channel: Not Required 00:24:35.754 Port ID: 0 (0x0000) 00:24:35.754 Controller ID: 65535 (0xffff) 00:24:35.754 Admin Max SQ Size: [2024-11-02 23:23:41.327148] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:35.754 [2024-11-02 23:23:41.327159] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3008 doesn't match qid 00:24:35.754 [2024-11-02 23:23:41.327173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327179] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3008 doesn't match qid 00:24:35.754 [2024-11-02 23:23:41.327187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327194] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3008 doesn't match qid 00:24:35.754 [2024-11-02 23:23:41.327201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327208] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3008 doesn't match qid 00:24:35.754 [2024-11-02 23:23:41.327215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327224] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.754 [2024-11-02 23:23:41.327254] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.754 [2024-11-02 23:23:41.327259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327267] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.754 [2024-11-02 23:23:41.327281] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327296] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.754 [2024-11-02 23:23:41.327301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327308] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:35.754 [2024-11-02 23:23:41.327314] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:35.754 [2024-11-02 23:23:41.327320] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327329] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.754 [2024-11-02 23:23:41.327351] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.754 [2024-11-02 23:23:41.327357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327364] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327373] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.754 [2024-11-02 23:23:41.327400] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.754 [2024-11-02 23:23:41.327408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327414] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327423] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.754 [2024-11-02 23:23:41.327447] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.754 [2024-11-02 23:23:41.327452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327459] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327467] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.754 [2024-11-02 23:23:41.327497] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.754 [2024-11-02 23:23:41.327503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:35.754 [2024-11-02 23:23:41.327510] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:35.754 [2024-11-02 23:23:41.327518] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 
0x183d00 00:24:35.754 [2024-11-02 23:23:41.327526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327544] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327556] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327565] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327589] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327602] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327610] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327636] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327648] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327656] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327680] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327692] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327701] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327724] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327736] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327745] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327768] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327780] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327788] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327812] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327824] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327832] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327856] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327868] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327876] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327898] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327910] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327918] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327947] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.327953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.327959] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327974] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.327982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.327998] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.328004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.328010] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328019] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.328042] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.328048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.328054] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328063] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.328090] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.328095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.328102] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328110] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.328139] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.328145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.328151] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328160] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.328185] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.328191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.328197] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328206] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.328235] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.328240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.328247] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328255] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.328286] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.328292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.328298] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328307] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.328330] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.755 [2024-11-02 23:23:41.328335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:35.755 [2024-11-02 23:23:41.328342] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328350] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.755 [2024-11-02 23:23:41.328358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.755 [2024-11-02 23:23:41.328374] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 
23:23:41.328386] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328394] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328420] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328431] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328440] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328465] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328477] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328486] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328509] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328521] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328529] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328552] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328564] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328573] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328598] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328610] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328619] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328644] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328656] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328664] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328689] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328701] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328710] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328733] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328745] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328754] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328786] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328798] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328807] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328832] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328844] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328852] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328880] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328892] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328900] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328925] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328937] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328946] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.328971] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.328977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.328984] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.328992] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.329000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.329013] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.329019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 
23:23:41.329025] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.329035] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.756 [2024-11-02 23:23:41.329043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.756 [2024-11-02 23:23:41.329061] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.756 [2024-11-02 23:23:41.329066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:35.756 [2024-11-02 23:23:41.329073] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329081] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329105] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329117] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329125] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329147] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329159] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329167] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329191] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329203] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329211] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329239] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329251] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329259] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329282] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329294] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329304] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329328] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329340] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329349] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329372] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329384] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329393] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329422] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329434] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329442] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329473] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329485] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329494] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329521] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329533] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329542] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329565] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329578] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329587] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329616] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329628] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329637] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329660] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 
23:23:41.329672] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329681] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329704] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329716] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329724] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329750] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329762] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329770] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329801] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:35.757 [2024-11-02 23:23:41.329813] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329822] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.757 [2024-11-02 23:23:41.329829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.757 [2024-11-02 23:23:41.329845] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.757 [2024-11-02 23:23:41.329851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:35.758 [2024-11-02 23:23:41.329858] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.329867] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.329875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.758 [2024-11-02 23:23:41.329892] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.758 [2024-11-02 23:23:41.329898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:35.758 [2024-11-02 23:23:41.329905] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.329913] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.329921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.758 [2024-11-02 23:23:41.329935] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.758 [2024-11-02 23:23:41.329941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:35.758 [2024-11-02 23:23:41.329947] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.329956] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.329963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.758 [2024-11-02 23:23:41.333981] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.758 [2024-11-02 23:23:41.333988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:35.758 [2024-11-02 23:23:41.333994] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.334003] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.334011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.758 [2024-11-02 23:23:41.334027] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.758 [2024-11-02 23:23:41.334033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:24:35.758 [2024-11-02 23:23:41.334040] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.334047] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:35.758 128 00:24:35.758 Transport Service Identifier: 4420 00:24:35.758 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:35.758 Transport Address: 192.168.100.8 00:24:35.758 Transport Specific Address Subtype - RDMA 00:24:35.758 RDMA QP Service Type: 1 (Reliable Connected) 00:24:35.758 RDMA Provider Type: 1 (No provider specified) 00:24:35.758 RDMA CM Service: 1 (RDMA_CM) 00:24:35.758 23:23:41 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:35.758 [2024-11-02 23:23:41.407207] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 
23.11.0 initialization... 00:24:35.758 [2024-11-02 23:23:41.407263] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718275 ] 00:24:35.758 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.758 [2024-11-02 23:23:41.454169] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:35.758 [2024-11-02 23:23:41.454234] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:24:35.758 [2024-11-02 23:23:41.454257] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:24:35.758 [2024-11-02 23:23:41.454261] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:24:35.758 [2024-11-02 23:23:41.454287] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:35.758 [2024-11-02 23:23:41.464447] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:24:35.758 [2024-11-02 23:23:41.474514] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:35.758 [2024-11-02 23:23:41.474525] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:24:35.758 [2024-11-02 23:23:41.474533] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474540] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474546] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474552] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474558] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474564] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474570] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474576] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474582] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474588] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474594] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474600] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474606] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474612] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474618] nvme_rdma.c: 
964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474624] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474630] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474636] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474642] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474648] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474654] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474663] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474669] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474676] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474682] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474688] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474694] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474700] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474706] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474712] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474718] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474723] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:24:35.758 [2024-11-02 23:23:41.474728] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:35.758 [2024-11-02 23:23:41.474733] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:24:35.758 [2024-11-02 23:23:41.474747] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.474759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:24:35.758 [2024-11-02 23:23:41.479974] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.758 [2024-11-02 23:23:41.479983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:35.758 [2024-11-02 23:23:41.479991] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.479998] 
nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:35.758 [2024-11-02 23:23:41.480004] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:35.758 [2024-11-02 23:23:41.480011] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:35.758 [2024-11-02 23:23:41.480023] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.480031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.758 [2024-11-02 23:23:41.480048] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.758 [2024-11-02 23:23:41.480054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:24:35.758 [2024-11-02 23:23:41.480060] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:35.758 [2024-11-02 23:23:41.480066] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:35.758 [2024-11-02 23:23:41.480073] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:35.759 [2024-11-02 23:23:41.480080] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.759 [2024-11-02 23:23:41.480106] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480118] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:35.759 [2024-11-02 23:23:41.480124] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480131] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:35.759 [2024-11-02 23:23:41.480139] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.759 [2024-11-02 23:23:41.480166] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480178] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:35.759 [2024-11-02 23:23:41.480184] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 
00:24:35.759 [2024-11-02 23:23:41.480192] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.759 [2024-11-02 23:23:41.480218] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480229] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:35.759 [2024-11-02 23:23:41.480235] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:35.759 [2024-11-02 23:23:41.480241] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480248] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:35.759 [2024-11-02 23:23:41.480354] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:35.759 [2024-11-02 23:23:41.480360] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:35.759 [2024-11-02 23:23:41.480368] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.759 [2024-11-02 23:23:41.480396] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480408] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:35.759 [2024-11-02 23:23:41.480413] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480422] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.759 [2024-11-02 23:23:41.480453] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480464] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:35.759 [2024-11-02 23:23:41.480470] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480476] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480483] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:35.759 [2024-11-02 23:23:41.480495] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480505] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:35.759 [2024-11-02 23:23:41.480549] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480563] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:35.759 [2024-11-02 23:23:41.480569] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:35.759 [2024-11-02 23:23:41.480575] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:35.759 [2024-11-02 23:23:41.480580] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:35.759 [2024-11-02 23:23:41.480586] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:35.759 [2024-11-02 23:23:41.480592] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480597] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480607] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480614] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.759 [2024-11-02 23:23:41.480640] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480655] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.759 [2024-11-02 23:23:41.480669] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:24:35.759 
[2024-11-02 23:23:41.480676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.759 [2024-11-02 23:23:41.480684] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.759 [2024-11-02 23:23:41.480698] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.759 [2024-11-02 23:23:41.480710] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480716] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480726] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480733] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.759 [2024-11-02 23:23:41.480761] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480773] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:35.759 [2024-11-02 23:23:41.480779] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480785] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480792] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480801] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480808] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.759 [2024-11-02 23:23:41.480836] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.759 [2024-11-02 23:23:41.480841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:24:35.759 [2024-11-02 23:23:41.480889] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480896] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480904] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:35.759 [2024-11-02 23:23:41.480912] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.759 [2024-11-02 23:23:41.480920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183d00 00:24:35.759 [2024-11-02 23:23:41.480940] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.480947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.480963] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:35.760 [2024-11-02 23:23:41.480981] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.480987] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.480995] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481004] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:35.760 [2024-11-02 23:23:41.481043] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481068] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481075] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481084] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:35.760 [2024-11-02 23:23:41.481113] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481127] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481133] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481141] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481149] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481156] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481163] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481169] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:35.760 [2024-11-02 23:23:41.481174] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:35.760 [2024-11-02 23:23:41.481180] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:35.760 [2024-11-02 23:23:41.481195] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.760 [2024-11-02 23:23:41.481212] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.760 [2024-11-02 23:23:41.481229] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481241] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481247] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481259] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481268] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.760 [2024-11-02 
23:23:41.481292] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481304] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481313] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.760 [2024-11-02 23:23:41.481337] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481349] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481358] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.760 [2024-11-02 23:23:41.481382] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481393] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481404] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183d00 00:24:35.760 [2024-11-02 23:23:41.481420] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183d00 00:24:35.760 [2024-11-02 23:23:41.481436] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183d00 00:24:35.760 [2024-11-02 23:23:41.481454] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183d00 
00:24:35.760 [2024-11-02 23:23:41.481470] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481490] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481497] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481511] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481517] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481530] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:35.760 [2024-11-02 23:23:41.481535] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.760 [2024-11-02 23:23:41.481541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:35.760 [2024-11-02 23:23:41.481551] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:35.760 ===================================================== 00:24:35.760 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.760 ===================================================== 00:24:35.760 Controller Capabilities/Features 00:24:35.761 ================================ 00:24:35.761 Vendor ID: 8086 00:24:35.761 Subsystem Vendor ID: 8086 00:24:35.761 Serial Number: SPDK00000000000001 00:24:35.761 Model Number: SPDK bdev Controller 00:24:35.761 Firmware Version: 24.01.1 00:24:35.761 Recommended Arb Burst: 6 00:24:35.761 IEEE OUI Identifier: e4 d2 5c 00:24:35.761 Multi-path I/O 00:24:35.761 May have multiple subsystem ports: Yes 00:24:35.761 May have multiple controllers: Yes 00:24:35.761 Associated with SR-IOV VF: No 00:24:35.761 Max Data Transfer Size: 131072 00:24:35.761 Max Number of Namespaces: 32 00:24:35.761 Max Number of I/O Queues: 127 00:24:35.761 NVMe Specification Version (VS): 1.3 00:24:35.761 NVMe Specification Version (Identify): 1.3 00:24:35.761 Maximum Queue Entries: 128 00:24:35.761 Contiguous Queues Required: Yes 00:24:35.761 Arbitration Mechanisms Supported 00:24:35.761 Weighted Round Robin: Not Supported 00:24:35.761 Vendor Specific: Not Supported 00:24:35.761 Reset Timeout: 15000 ms 00:24:35.761 Doorbell Stride: 4 bytes 00:24:35.761 NVM Subsystem Reset: Not Supported 00:24:35.761 Command Sets Supported 00:24:35.761 NVM Command Set: Supported 00:24:35.761 Boot Partition: Not Supported 00:24:35.761 Memory Page Size Minimum: 4096 bytes 00:24:35.761 Memory Page Size Maximum: 4096 bytes 00:24:35.761 Persistent Memory Region: Not Supported 00:24:35.761 Optional Asynchronous Events Supported 00:24:35.761 Namespace Attribute Notices: Supported 00:24:35.761 Firmware Activation Notices: Not Supported 00:24:35.761 ANA Change Notices: Not Supported 00:24:35.761 PLE 
Aggregate Log Change Notices: Not Supported 00:24:35.761 LBA Status Info Alert Notices: Not Supported 00:24:35.761 EGE Aggregate Log Change Notices: Not Supported 00:24:35.761 Normal NVM Subsystem Shutdown event: Not Supported 00:24:35.761 Zone Descriptor Change Notices: Not Supported 00:24:35.761 Discovery Log Change Notices: Not Supported 00:24:35.761 Controller Attributes 00:24:35.761 128-bit Host Identifier: Supported 00:24:35.761 Non-Operational Permissive Mode: Not Supported 00:24:35.761 NVM Sets: Not Supported 00:24:35.761 Read Recovery Levels: Not Supported 00:24:35.761 Endurance Groups: Not Supported 00:24:35.761 Predictable Latency Mode: Not Supported 00:24:35.761 Traffic Based Keep ALive: Not Supported 00:24:35.761 Namespace Granularity: Not Supported 00:24:35.761 SQ Associations: Not Supported 00:24:35.761 UUID List: Not Supported 00:24:35.761 Multi-Domain Subsystem: Not Supported 00:24:35.761 Fixed Capacity Management: Not Supported 00:24:35.761 Variable Capacity Management: Not Supported 00:24:35.761 Delete Endurance Group: Not Supported 00:24:35.761 Delete NVM Set: Not Supported 00:24:35.761 Extended LBA Formats Supported: Not Supported 00:24:35.761 Flexible Data Placement Supported: Not Supported 00:24:35.761 00:24:35.761 Controller Memory Buffer Support 00:24:35.761 ================================ 00:24:35.761 Supported: No 00:24:35.761 00:24:35.761 Persistent Memory Region Support 00:24:35.761 ================================ 00:24:35.761 Supported: No 00:24:35.761 00:24:35.761 Admin Command Set Attributes 00:24:35.761 ============================ 00:24:35.761 Security Send/Receive: Not Supported 00:24:35.761 Format NVM: Not Supported 00:24:35.761 Firmware Activate/Download: Not Supported 00:24:35.761 Namespace Management: Not Supported 00:24:35.761 Device Self-Test: Not Supported 00:24:35.761 Directives: Not Supported 00:24:35.761 NVMe-MI: Not Supported 00:24:35.761 Virtualization Management: Not Supported 00:24:35.761 Doorbell Buffer Config: Not Supported 00:24:35.761 Get LBA Status Capability: Not Supported 00:24:35.761 Command & Feature Lockdown Capability: Not Supported 00:24:35.761 Abort Command Limit: 4 00:24:35.761 Async Event Request Limit: 4 00:24:35.761 Number of Firmware Slots: N/A 00:24:35.761 Firmware Slot 1 Read-Only: N/A 00:24:35.761 Firmware Activation Without Reset: N/A 00:24:35.761 Multiple Update Detection Support: N/A 00:24:35.761 Firmware Update Granularity: No Information Provided 00:24:35.761 Per-Namespace SMART Log: No 00:24:35.761 Asymmetric Namespace Access Log Page: Not Supported 00:24:35.761 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:35.761 Command Effects Log Page: Supported 00:24:35.761 Get Log Page Extended Data: Supported 00:24:35.761 Telemetry Log Pages: Not Supported 00:24:35.761 Persistent Event Log Pages: Not Supported 00:24:35.761 Supported Log Pages Log Page: May Support 00:24:35.761 Commands Supported & Effects Log Page: Not Supported 00:24:35.761 Feature Identifiers & Effects Log Page:May Support 00:24:35.761 NVMe-MI Commands & Effects Log Page: May Support 00:24:35.761 Data Area 4 for Telemetry Log: Not Supported 00:24:35.761 Error Log Page Entries Supported: 128 00:24:35.761 Keep Alive: Supported 00:24:35.761 Keep Alive Granularity: 10000 ms 00:24:35.761 00:24:35.761 NVM Command Set Attributes 00:24:35.761 ========================== 00:24:35.761 Submission Queue Entry Size 00:24:35.761 Max: 64 00:24:35.761 Min: 64 00:24:35.761 Completion Queue Entry Size 00:24:35.761 Max: 16 00:24:35.761 Min: 16 00:24:35.761 Number of 
Namespaces: 32 00:24:35.761 Compare Command: Supported 00:24:35.761 Write Uncorrectable Command: Not Supported 00:24:35.761 Dataset Management Command: Supported 00:24:35.761 Write Zeroes Command: Supported 00:24:35.761 Set Features Save Field: Not Supported 00:24:35.761 Reservations: Supported 00:24:35.761 Timestamp: Not Supported 00:24:35.761 Copy: Supported 00:24:35.761 Volatile Write Cache: Present 00:24:35.761 Atomic Write Unit (Normal): 1 00:24:35.761 Atomic Write Unit (PFail): 1 00:24:35.761 Atomic Compare & Write Unit: 1 00:24:35.761 Fused Compare & Write: Supported 00:24:35.761 Scatter-Gather List 00:24:35.761 SGL Command Set: Supported 00:24:35.761 SGL Keyed: Supported 00:24:35.761 SGL Bit Bucket Descriptor: Not Supported 00:24:35.761 SGL Metadata Pointer: Not Supported 00:24:35.761 Oversized SGL: Not Supported 00:24:35.761 SGL Metadata Address: Not Supported 00:24:35.761 SGL Offset: Supported 00:24:35.761 Transport SGL Data Block: Not Supported 00:24:35.761 Replay Protected Memory Block: Not Supported 00:24:35.761 00:24:35.761 Firmware Slot Information 00:24:35.761 ========================= 00:24:35.761 Active slot: 1 00:24:35.761 Slot 1 Firmware Revision: 24.01.1 00:24:35.761 00:24:35.761 00:24:35.761 Commands Supported and Effects 00:24:35.761 ============================== 00:24:35.761 Admin Commands 00:24:35.761 -------------- 00:24:35.761 Get Log Page (02h): Supported 00:24:35.761 Identify (06h): Supported 00:24:35.761 Abort (08h): Supported 00:24:35.761 Set Features (09h): Supported 00:24:35.761 Get Features (0Ah): Supported 00:24:35.761 Asynchronous Event Request (0Ch): Supported 00:24:35.761 Keep Alive (18h): Supported 00:24:35.761 I/O Commands 00:24:35.761 ------------ 00:24:35.761 Flush (00h): Supported LBA-Change 00:24:35.761 Write (01h): Supported LBA-Change 00:24:35.761 Read (02h): Supported 00:24:35.761 Compare (05h): Supported 00:24:35.761 Write Zeroes (08h): Supported LBA-Change 00:24:35.761 Dataset Management (09h): Supported LBA-Change 00:24:35.761 Copy (19h): Supported LBA-Change 00:24:35.761 Unknown (79h): Supported LBA-Change 00:24:35.761 Unknown (7Ah): Supported 00:24:35.761 00:24:35.761 Error Log 00:24:35.761 ========= 00:24:35.761 00:24:35.761 Arbitration 00:24:35.761 =========== 00:24:35.761 Arbitration Burst: 1 00:24:35.761 00:24:35.761 Power Management 00:24:35.761 ================ 00:24:35.761 Number of Power States: 1 00:24:35.761 Current Power State: Power State #0 00:24:35.761 Power State #0: 00:24:35.761 Max Power: 0.00 W 00:24:35.761 Non-Operational State: Operational 00:24:35.761 Entry Latency: Not Reported 00:24:35.761 Exit Latency: Not Reported 00:24:35.761 Relative Read Throughput: 0 00:24:35.761 Relative Read Latency: 0 00:24:35.761 Relative Write Throughput: 0 00:24:35.761 Relative Write Latency: 0 00:24:35.761 Idle Power: Not Reported 00:24:35.761 Active Power: Not Reported 00:24:35.761 Non-Operational Permissive Mode: Not Supported 00:24:35.761 00:24:35.762 Health Information 00:24:35.762 ================== 00:24:35.762 Critical Warnings: 00:24:35.762 Available Spare Space: OK 00:24:35.762 Temperature: OK 00:24:35.762 Device Reliability: OK 00:24:35.762 Read Only: No 00:24:35.762 Volatile Memory Backup: OK 00:24:35.762 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:35.762 Temperature Threshol[2024-11-02 23:23:41.481632] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.481655] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.481661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481667] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481691] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:35.762 [2024-11-02 23:23:41.481700] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 54454 doesn't match qid 00:24:35.762 [2024-11-02 23:23:41.481714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32567 cdw0:5 sqhd:5e28 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481721] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 54454 doesn't match qid 00:24:35.762 [2024-11-02 23:23:41.481729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32567 cdw0:5 sqhd:5e28 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481735] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 54454 doesn't match qid 00:24:35.762 [2024-11-02 23:23:41.481742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32567 cdw0:5 sqhd:5e28 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481748] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 54454 doesn't match qid 00:24:35.762 [2024-11-02 23:23:41.481756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32567 cdw0:5 sqhd:5e28 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481766] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.481791] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.481796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481804] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.481818] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481832] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.481838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481844] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:35.762 [2024-11-02 23:23:41.481850] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 
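The long run of FABRIC PROPERTY SET/GET entries that follows is the initiator side of the controller shutdown handshake: after "Prepare to destruct SSD" the host writes CC.SHN with a fabrics property set and then keeps issuing property gets to poll CSTS until the "shutdown complete" line further down. A rough way to reproduce the same identify-then-shutdown exchange against this listener with nvme-cli is sketched here; the address and port follow the NVMF_FIRST_TARGET_IP/NVMF_PORT values seen elsewhere in this run, and the /dev/nvme0 name is an assumption, not something taken from this log.
# hedged sketch, not copied from this run: connect, dump Identify Controller, disconnect
sudo nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
sudo nvme id-ctrl /dev/nvme0                          # assumes the new controller enumerates as nvme0
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # teardown triggers an analogous CC.SHN / CSTS shutdown poll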
00:24:35.762 [2024-11-02 23:23:41.481856] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481865] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.481892] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.481898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481905] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481913] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.481943] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.481949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.481955] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481963] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.481976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.481998] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482010] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482019] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.482049] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482061] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482070] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 
23:23:41.482096] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482108] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482116] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.482142] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482155] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482163] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.482194] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482206] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482215] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.482238] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482250] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482259] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.482280] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482292] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482301] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.482323] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482335] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482343] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.482372] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:35.762 [2024-11-02 23:23:41.482384] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482393] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.762 [2024-11-02 23:23:41.482400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.762 [2024-11-02 23:23:41.482422] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.762 [2024-11-02 23:23:41.482428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482434] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482442] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482473] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482485] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482494] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482523] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:35.763 
[2024-11-02 23:23:41.482535] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482544] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482569] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482581] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482589] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482614] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482626] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482634] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482663] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482675] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482683] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482705] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482716] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482725] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482750] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482762] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482771] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482792] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482804] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482812] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482840] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482851] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482860] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482890] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482902] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482911] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482938] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482949] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482958] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.482970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.482985] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.482990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.482997] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483005] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.483028] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.483034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.483040] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483049] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.483074] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.483079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.483085] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483094] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.483117] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.483123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 23:23:41.483129] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483139] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.483170] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.483175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:35.763 [2024-11-02 
23:23:41.483181] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483190] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.763 [2024-11-02 23:23:41.483198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.763 [2024-11-02 23:23:41.483213] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.763 [2024-11-02 23:23:41.483219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483225] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483233] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483260] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483272] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483281] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483306] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483318] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483326] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483350] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483361] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483370] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483399] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483411] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483421] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483446] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483458] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483466] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483491] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483503] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483512] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483539] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483551] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483559] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483588] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483600] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483608] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483633] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483645] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483654] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483679] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483692] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483700] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483729] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483741] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483750] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483771] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483783] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483791] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483813] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 
23:23:41.483825] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483833] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483860] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483872] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483881] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483904] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483916] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483924] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.483932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.483949] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.483955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.483962] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.487980] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.487988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:35.764 [2024-11-02 23:23:41.488010] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:35.764 [2024-11-02 23:23:41.488016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:24:35.764 [2024-11-02 23:23:41.488022] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:35.764 [2024-11-02 23:23:41.488029] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:36.024 d: 0 Kelvin (-273 Celsius) 00:24:36.024 Available Spare: 0% 00:24:36.024 Available Spare Threshold: 0% 00:24:36.024 Life Percentage Used: 0% 00:24:36.024 Data Units Read: 0 00:24:36.024 Data Units Written: 0 
00:24:36.024 Host Read Commands: 0 00:24:36.024 Host Write Commands: 0 00:24:36.024 Controller Busy Time: 0 minutes 00:24:36.024 Power Cycles: 0 00:24:36.024 Power On Hours: 0 hours 00:24:36.024 Unsafe Shutdowns: 0 00:24:36.024 Unrecoverable Media Errors: 0 00:24:36.024 Lifetime Error Log Entries: 0 00:24:36.024 Warning Temperature Time: 0 minutes 00:24:36.024 Critical Temperature Time: 0 minutes 00:24:36.024 00:24:36.024 Number of Queues 00:24:36.024 ================ 00:24:36.024 Number of I/O Submission Queues: 127 00:24:36.024 Number of I/O Completion Queues: 127 00:24:36.024 00:24:36.024 Active Namespaces 00:24:36.024 ================= 00:24:36.024 Namespace ID:1 00:24:36.024 Error Recovery Timeout: Unlimited 00:24:36.024 Command Set Identifier: NVM (00h) 00:24:36.024 Deallocate: Supported 00:24:36.024 Deallocated/Unwritten Error: Not Supported 00:24:36.024 Deallocated Read Value: Unknown 00:24:36.024 Deallocate in Write Zeroes: Not Supported 00:24:36.024 Deallocated Guard Field: 0xFFFF 00:24:36.024 Flush: Supported 00:24:36.024 Reservation: Supported 00:24:36.024 Namespace Sharing Capabilities: Multiple Controllers 00:24:36.024 Size (in LBAs): 131072 (0GiB) 00:24:36.024 Capacity (in LBAs): 131072 (0GiB) 00:24:36.024 Utilization (in LBAs): 131072 (0GiB) 00:24:36.024 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:36.024 EUI64: ABCDEF0123456789 00:24:36.024 UUID: 2b2d19bc-8015-4e87-b4f0-b4d1b46c19a4 00:24:36.024 Thin Provisioning: Not Supported 00:24:36.024 Per-NS Atomic Units: Yes 00:24:36.024 Atomic Boundary Size (Normal): 0 00:24:36.024 Atomic Boundary Size (PFail): 0 00:24:36.024 Atomic Boundary Offset: 0 00:24:36.024 Maximum Single Source Range Length: 65535 00:24:36.024 Maximum Copy Length: 65535 00:24:36.024 Maximum Source Range Count: 1 00:24:36.024 NGUID/EUI64 Never Reused: No 00:24:36.024 Namespace Write Protected: No 00:24:36.024 Number of LBA Formats: 1 00:24:36.024 Current LBA Format: LBA Format #00 00:24:36.024 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:36.024 00:24:36.024 23:23:41 -- host/identify.sh@51 -- # sync 00:24:36.024 23:23:41 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.024 23:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:36.024 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:36.024 23:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:36.024 23:23:41 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:36.024 23:23:41 -- host/identify.sh@56 -- # nvmftestfini 00:24:36.024 23:23:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:36.024 23:23:41 -- nvmf/common.sh@116 -- # sync 00:24:36.024 23:23:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:36.024 23:23:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:36.024 23:23:41 -- nvmf/common.sh@119 -- # set +e 00:24:36.024 23:23:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:36.024 23:23:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:36.024 rmmod nvme_rdma 00:24:36.024 rmmod nvme_fabrics 00:24:36.024 23:23:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:36.024 23:23:41 -- nvmf/common.sh@123 -- # set -e 00:24:36.024 23:23:41 -- nvmf/common.sh@124 -- # return 0 00:24:36.024 23:23:41 -- nvmf/common.sh@477 -- # '[' -n 718120 ']' 00:24:36.024 23:23:41 -- nvmf/common.sh@478 -- # killprocess 718120 00:24:36.024 23:23:41 -- common/autotest_common.sh@926 -- # '[' -z 718120 ']' 00:24:36.024 23:23:41 -- common/autotest_common.sh@930 -- # kill -0 718120 00:24:36.024 23:23:41 -- 
common/autotest_common.sh@931 -- # uname 00:24:36.024 23:23:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:36.024 23:23:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 718120 00:24:36.024 23:23:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:36.024 23:23:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:36.024 23:23:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 718120' 00:24:36.024 killing process with pid 718120 00:24:36.024 23:23:41 -- common/autotest_common.sh@945 -- # kill 718120 00:24:36.024 [2024-11-02 23:23:41.684074] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:36.024 23:23:41 -- common/autotest_common.sh@950 -- # wait 718120 00:24:36.283 23:23:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:36.283 23:23:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:36.283 00:24:36.283 real 0m8.680s 00:24:36.283 user 0m8.543s 00:24:36.283 sys 0m5.590s 00:24:36.283 23:23:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.283 23:23:41 -- common/autotest_common.sh@10 -- # set +x 00:24:36.283 ************************************ 00:24:36.283 END TEST nvmf_identify 00:24:36.283 ************************************ 00:24:36.283 23:23:42 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:24:36.283 23:23:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:36.283 23:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:36.283 23:23:42 -- common/autotest_common.sh@10 -- # set +x 00:24:36.283 ************************************ 00:24:36.283 START TEST nvmf_perf 00:24:36.283 ************************************ 00:24:36.283 23:23:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:24:36.543 * Looking for test storage... 
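The identify test above tears itself down by deleting the subsystem over RPC, unloading the initiator modules, and killing the nvmf_tgt process (pid 718120 in this run). Done by hand, the same cleanup looks roughly like the sketch below; the rpc.py path is the one assigned to rpc_py a little further down in this log, while the exact invocations are a sketch rather than commands copied from this run.
# hedged sketch of the teardown the harness performed via rpc_cmd / nvmftestfini
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sudo "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # same RPC as host/identify.sh@52 above
sudo modprobe -r nvme_rdma nvme_fabrics                        # mirrors the rmmod output above
sudo kill "$nvmfpid"                                           # nvmfpid was 718120 in this run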
00:24:36.543 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:36.543 23:23:42 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.543 23:23:42 -- nvmf/common.sh@7 -- # uname -s 00:24:36.543 23:23:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.543 23:23:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.543 23:23:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.543 23:23:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.543 23:23:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.543 23:23:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.543 23:23:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.543 23:23:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.543 23:23:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.543 23:23:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.543 23:23:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:36.543 23:23:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:36.543 23:23:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.543 23:23:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.543 23:23:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.543 23:23:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:36.543 23:23:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.543 23:23:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.543 23:23:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.543 23:23:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.543 23:23:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.543 23:23:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.543 23:23:42 -- paths/export.sh@5 -- # export PATH 00:24:36.543 23:23:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.543 23:23:42 -- nvmf/common.sh@46 -- # : 0 00:24:36.543 23:23:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:36.543 23:23:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:36.543 23:23:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:36.543 23:23:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.543 23:23:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.543 23:23:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:36.543 23:23:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:36.543 23:23:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:36.543 23:23:42 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:36.543 23:23:42 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:36.543 23:23:42 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:36.543 23:23:42 -- host/perf.sh@17 -- # nvmftestinit 00:24:36.543 23:23:42 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:36.543 23:23:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.543 23:23:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:36.543 23:23:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:36.543 23:23:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:36.543 23:23:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.543 23:23:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.543 23:23:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.543 23:23:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:36.543 23:23:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:36.543 23:23:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:36.543 23:23:42 -- common/autotest_common.sh@10 -- # set +x 00:24:43.113 23:23:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:43.113 23:23:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:43.113 23:23:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:43.113 23:23:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:43.113 23:23:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:43.113 23:23:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:43.113 23:23:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:43.113 23:23:48 -- nvmf/common.sh@294 -- # net_devs=() 
00:24:43.113 23:23:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:43.113 23:23:48 -- nvmf/common.sh@295 -- # e810=() 00:24:43.113 23:23:48 -- nvmf/common.sh@295 -- # local -ga e810 00:24:43.113 23:23:48 -- nvmf/common.sh@296 -- # x722=() 00:24:43.113 23:23:48 -- nvmf/common.sh@296 -- # local -ga x722 00:24:43.113 23:23:48 -- nvmf/common.sh@297 -- # mlx=() 00:24:43.113 23:23:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:43.113 23:23:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.113 23:23:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.114 23:23:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:43.114 23:23:48 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:43.114 23:23:48 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:43.114 23:23:48 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:43.114 23:23:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:43.114 23:23:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:43.114 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:43.114 23:23:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:43.114 23:23:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:43.114 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:43.114 23:23:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:43.114 23:23:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:43.114 23:23:48 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.114 23:23:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.114 23:23:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.114 23:23:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:43.114 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.114 23:23:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.114 23:23:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.114 23:23:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.114 23:23:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:43.114 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:43.114 23:23:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.114 23:23:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:43.114 23:23:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:43.114 23:23:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:43.114 23:23:48 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:43.114 23:23:48 -- nvmf/common.sh@57 -- # uname 00:24:43.114 23:23:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:43.114 23:23:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:43.114 23:23:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:43.114 23:23:48 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:43.114 23:23:48 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:43.114 23:23:48 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:43.114 23:23:48 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:43.114 23:23:48 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:43.114 23:23:48 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:43.114 23:23:48 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:43.114 23:23:48 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:43.114 23:23:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:43.114 23:23:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:43.114 23:23:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:43.114 23:23:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:43.114 23:23:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:43.114 23:23:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@104 -- # continue 2 00:24:43.114 23:23:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:43.114 23:23:48 -- 
nvmf/common.sh@104 -- # continue 2 00:24:43.114 23:23:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:43.114 23:23:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:43.114 23:23:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:43.114 23:23:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:43.114 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:43.114 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:43.114 altname enp217s0f0np0 00:24:43.114 altname ens818f0np0 00:24:43.114 inet 192.168.100.8/24 scope global mlx_0_0 00:24:43.114 valid_lft forever preferred_lft forever 00:24:43.114 23:23:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:43.114 23:23:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:43.114 23:23:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:43.114 23:23:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:43.114 23:23:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:43.114 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:43.114 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:43.114 altname enp217s0f1np1 00:24:43.114 altname ens818f1np1 00:24:43.114 inet 192.168.100.9/24 scope global mlx_0_1 00:24:43.114 valid_lft forever preferred_lft forever 00:24:43.114 23:23:48 -- nvmf/common.sh@410 -- # return 0 00:24:43.114 23:23:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:43.114 23:23:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:43.114 23:23:48 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:43.114 23:23:48 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:43.114 23:23:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:43.114 23:23:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:43.114 23:23:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:43.114 23:23:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:43.114 23:23:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:43.114 23:23:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@104 -- # continue 2 00:24:43.114 23:23:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:43.114 23:23:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.114 23:23:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
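The get_ip_address trace above resolves each RDMA netdev to its IPv4 address with an ip/awk/cut pipeline. A minimal standalone sketch of the same idea (the helper name below is illustrative, not part of nvmf/common.sh):

# Sketch: print the IPv4 address of an interface, mirroring the pipeline in the trace.
get_ipv4() {
  local ifname=$1
  # `ip -o -4 addr show` prints one line per address; field 4 is the CIDR, cut drops the /prefix.
  ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
}
get_ipv4 mlx_0_0   # 192.168.100.8 on this rig
get_ipv4 mlx_0_1   # 192.168.100.9 on this rig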
00:24:43.114 23:23:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:43.114 23:23:48 -- nvmf/common.sh@104 -- # continue 2 00:24:43.114 23:23:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:43.114 23:23:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:43.114 23:23:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:43.114 23:23:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:43.114 23:23:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:43.114 23:23:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:43.114 23:23:48 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:43.114 192.168.100.9' 00:24:43.114 23:23:48 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:43.114 192.168.100.9' 00:24:43.114 23:23:48 -- nvmf/common.sh@445 -- # head -n 1 00:24:43.114 23:23:48 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:43.114 23:23:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:43.114 192.168.100.9' 00:24:43.114 23:23:48 -- nvmf/common.sh@446 -- # tail -n +2 00:24:43.114 23:23:48 -- nvmf/common.sh@446 -- # head -n 1 00:24:43.114 23:23:48 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:43.114 23:23:48 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:43.115 23:23:48 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:43.115 23:23:48 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:43.115 23:23:48 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:43.115 23:23:48 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:43.115 23:23:48 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:43.115 23:23:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:43.115 23:23:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:43.115 23:23:48 -- common/autotest_common.sh@10 -- # set +x 00:24:43.115 23:23:48 -- nvmf/common.sh@469 -- # nvmfpid=721588 00:24:43.115 23:23:48 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.115 23:23:48 -- nvmf/common.sh@470 -- # waitforlisten 721588 00:24:43.115 23:23:48 -- common/autotest_common.sh@819 -- # '[' -z 721588 ']' 00:24:43.115 23:23:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.115 23:23:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:43.115 23:23:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.115 23:23:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:43.115 23:23:48 -- common/autotest_common.sh@10 -- # set +x 00:24:43.115 [2024-11-02 23:23:48.674944] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
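The first and second fabric target addresses are then split out of the newline-separated RDMA_IP_LIST with the head/tail idiom seen in the trace; the same thing in isolation:

# Sketch: first line -> first target IP, second line -> second target IP.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9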
00:24:43.115 [2024-11-02 23:23:48.675005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.115 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.115 [2024-11-02 23:23:48.750459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.115 [2024-11-02 23:23:48.841170] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:43.115 [2024-11-02 23:23:48.841322] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.115 [2024-11-02 23:23:48.841346] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.115 [2024-11-02 23:23:48.841359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.115 [2024-11-02 23:23:48.841414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.115 [2024-11-02 23:23:48.841507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.115 [2024-11-02 23:23:48.841594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.115 [2024-11-02 23:23:48.841598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.094 23:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:44.094 23:23:49 -- common/autotest_common.sh@852 -- # return 0 00:24:44.094 23:23:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:44.094 23:23:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:44.094 23:23:49 -- common/autotest_common.sh@10 -- # set +x 00:24:44.094 23:23:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.094 23:23:49 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:44.094 23:23:49 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:47.387 23:23:52 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:47.387 23:23:52 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:47.387 23:23:52 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:24:47.387 23:23:52 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:47.387 23:23:53 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:47.387 23:23:53 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:24:47.387 23:23:53 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:47.387 23:23:53 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:24:47.387 23:23:53 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:24:47.645 [2024-11-02 23:23:53.230228] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:24:47.645 [2024-11-02 23:23:53.251164] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22ab430/0x22b8fc0) succeed. 00:24:47.645 [2024-11-02 23:23:53.260577] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22aca20/0x22fa660) succeed. 
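perf.sh@30 above locates the local NVMe bdev by asking the running target for its bdev configuration and filtering it with jq. A rough one-off equivalent (rpc.py invoked from the SPDK checkout root, path shortened):

# Sketch: recover the PCIe address (traddr) backing the bdev named Nvme0.
local_nvme_trid=$(./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[].params | select(.name=="Nvme0").traddr')
echo "$local_nvme_trid"   # 0000:d8:00.0 in this run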
00:24:47.645 23:23:53 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.904 23:23:53 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:47.904 23:23:53 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.163 23:23:53 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:48.163 23:23:53 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:48.422 23:23:53 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:48.422 [2024-11-02 23:23:54.126194] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:48.422 23:23:54 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:48.680 23:23:54 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:24:48.680 23:23:54 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:48.680 23:23:54 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:48.680 23:23:54 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:50.057 Initializing NVMe Controllers 00:24:50.057 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:24:50.057 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:24:50.057 Initialization complete. Launching workers. 00:24:50.057 ======================================================== 00:24:50.057 Latency(us) 00:24:50.057 Device Information : IOPS MiB/s Average min max 00:24:50.057 PCIE (0000:d8:00.0) NSID 1 from core 0: 103543.58 404.47 308.64 34.18 5208.77 00:24:50.057 ======================================================== 00:24:50.057 Total : 103543.58 404.47 308.64 34.18 5208.77 00:24:50.057 00:24:50.057 23:23:55 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:50.057 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.345 Initializing NVMe Controllers 00:24:53.345 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.345 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:53.345 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:53.345 Initialization complete. Launching workers. 
00:24:53.345 ======================================================== 00:24:53.345 Latency(us) 00:24:53.345 Device Information : IOPS MiB/s Average min max 00:24:53.345 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6827.99 26.67 146.26 43.18 5047.82 00:24:53.345 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5301.99 20.71 188.42 65.88 5014.52 00:24:53.345 ======================================================== 00:24:53.345 Total : 12129.98 47.38 164.69 43.18 5047.82 00:24:53.345 00:24:53.345 23:23:58 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:53.345 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.633 Initializing NVMe Controllers 00:24:56.633 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.633 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:56.633 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:56.633 Initialization complete. Launching workers. 00:24:56.633 ======================================================== 00:24:56.633 Latency(us) 00:24:56.633 Device Information : IOPS MiB/s Average min max 00:24:56.633 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19344.85 75.57 1653.75 453.80 6068.49 00:24:56.633 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4049.11 15.82 7963.49 6890.66 9018.30 00:24:56.633 ======================================================== 00:24:56.633 Total : 23393.96 91.38 2745.86 453.80 9018.30 00:24:56.633 00:24:56.892 23:24:02 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:24:56.892 23:24:02 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:56.892 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.083 Initializing NVMe Controllers 00:25:01.083 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.083 Controller IO queue size 128, less than required. 00:25:01.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:01.083 Controller IO queue size 128, less than required. 00:25:01.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:01.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:01.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:01.083 Initialization complete. Launching workers. 
00:25:01.083 ======================================================== 00:25:01.083 Latency(us) 00:25:01.083 Device Information : IOPS MiB/s Average min max 00:25:01.083 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4088.50 1022.12 31380.68 14591.85 65816.97 00:25:01.083 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4143.00 1035.75 30699.96 13447.59 53446.23 00:25:01.083 ======================================================== 00:25:01.083 Total : 8231.50 2057.88 31038.06 13447.59 65816.97 00:25:01.083 00:25:01.083 23:24:06 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:01.083 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.343 No valid NVMe controllers or AIO or URING devices found 00:25:01.603 Initializing NVMe Controllers 00:25:01.603 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.603 Controller IO queue size 128, less than required. 00:25:01.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:01.603 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:01.603 Controller IO queue size 128, less than required. 00:25:01.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:01.603 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:01.603 WARNING: Some requested NVMe devices were skipped 00:25:01.603 23:24:07 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:01.603 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.797 Initializing NVMe Controllers 00:25:05.797 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.797 Controller IO queue size 128, less than required. 00:25:05.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:05.797 Controller IO queue size 128, less than required. 00:25:05.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:05.797 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:05.797 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:05.797 Initialization complete. Launching workers. 
00:25:05.797 00:25:05.797 ==================== 00:25:05.797 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:05.797 RDMA transport: 00:25:05.797 dev name: mlx5_0 00:25:05.797 polls: 422427 00:25:05.797 idle_polls: 418262 00:25:05.797 completions: 46177 00:25:05.797 queued_requests: 1 00:25:05.797 total_send_wrs: 23152 00:25:05.797 send_doorbell_updates: 3971 00:25:05.797 total_recv_wrs: 23152 00:25:05.797 recv_doorbell_updates: 3971 00:25:05.797 --------------------------------- 00:25:05.797 00:25:05.797 ==================== 00:25:05.797 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:05.797 RDMA transport: 00:25:05.797 dev name: mlx5_0 00:25:05.797 polls: 424456 00:25:05.797 idle_polls: 424176 00:25:05.797 completions: 20506 00:25:05.797 queued_requests: 1 00:25:05.797 total_send_wrs: 10337 00:25:05.797 send_doorbell_updates: 262 00:25:05.797 total_recv_wrs: 10337 00:25:05.797 recv_doorbell_updates: 263 00:25:05.797 --------------------------------- 00:25:05.797 ======================================================== 00:25:05.797 Latency(us) 00:25:05.797 Device Information : IOPS MiB/s Average min max 00:25:05.797 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5819.09 1454.77 22049.27 9213.25 55264.99 00:25:05.797 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2615.59 653.90 49216.67 29195.79 76931.12 00:25:05.797 ======================================================== 00:25:05.797 Total : 8434.68 2108.67 30473.87 9213.25 76931.12 00:25:05.797 00:25:05.797 23:24:11 -- host/perf.sh@66 -- # sync 00:25:05.797 23:24:11 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.056 23:24:11 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:06.056 23:24:11 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:06.056 23:24:11 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:12.627 23:24:17 -- host/perf.sh@72 -- # ls_guid=ad51e514-8572-4ef1-b11c-c48ef87f8d8d 00:25:12.627 23:24:17 -- host/perf.sh@73 -- # get_lvs_free_mb ad51e514-8572-4ef1-b11c-c48ef87f8d8d 00:25:12.627 23:24:17 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ad51e514-8572-4ef1-b11c-c48ef87f8d8d 00:25:12.627 23:24:17 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:12.627 23:24:17 -- common/autotest_common.sh@1345 -- # local fc 00:25:12.627 23:24:17 -- common/autotest_common.sh@1346 -- # local cs 00:25:12.627 23:24:17 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:12.627 23:24:17 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:12.627 { 00:25:12.627 "uuid": "ad51e514-8572-4ef1-b11c-c48ef87f8d8d", 00:25:12.627 "name": "lvs_0", 00:25:12.627 "base_bdev": "Nvme0n1", 00:25:12.627 "total_data_clusters": 476466, 00:25:12.627 "free_clusters": 476466, 00:25:12.627 "block_size": 512, 00:25:12.627 "cluster_size": 4194304 00:25:12.627 } 00:25:12.627 ]' 00:25:12.627 23:24:17 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ad51e514-8572-4ef1-b11c-c48ef87f8d8d") .free_clusters' 00:25:12.628 23:24:17 -- common/autotest_common.sh@1348 -- # fc=476466 00:25:12.628 23:24:17 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ad51e514-8572-4ef1-b11c-c48ef87f8d8d") .cluster_size' 00:25:12.628 23:24:17 
-- common/autotest_common.sh@1349 -- # cs=4194304 00:25:12.628 23:24:17 -- common/autotest_common.sh@1352 -- # free_mb=1905864 00:25:12.628 23:24:17 -- common/autotest_common.sh@1353 -- # echo 1905864 00:25:12.628 1905864 00:25:12.628 23:24:17 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:12.628 23:24:17 -- host/perf.sh@78 -- # free_mb=20480 00:25:12.628 23:24:17 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ad51e514-8572-4ef1-b11c-c48ef87f8d8d lbd_0 20480 00:25:12.887 23:24:18 -- host/perf.sh@80 -- # lb_guid=a84ade09-339a-43d5-9937-4b37076b3db5 00:25:12.887 23:24:18 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore a84ade09-339a-43d5-9937-4b37076b3db5 lvs_n_0 00:25:14.793 23:24:20 -- host/perf.sh@83 -- # ls_nested_guid=d9fa6ce6-f067-40de-adad-2382cb1c29e9 00:25:14.793 23:24:20 -- host/perf.sh@84 -- # get_lvs_free_mb d9fa6ce6-f067-40de-adad-2382cb1c29e9 00:25:14.793 23:24:20 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d9fa6ce6-f067-40de-adad-2382cb1c29e9 00:25:14.793 23:24:20 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:14.793 23:24:20 -- common/autotest_common.sh@1345 -- # local fc 00:25:14.793 23:24:20 -- common/autotest_common.sh@1346 -- # local cs 00:25:14.793 23:24:20 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:15.052 23:24:20 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:15.052 { 00:25:15.052 "uuid": "ad51e514-8572-4ef1-b11c-c48ef87f8d8d", 00:25:15.052 "name": "lvs_0", 00:25:15.052 "base_bdev": "Nvme0n1", 00:25:15.052 "total_data_clusters": 476466, 00:25:15.052 "free_clusters": 471346, 00:25:15.052 "block_size": 512, 00:25:15.052 "cluster_size": 4194304 00:25:15.052 }, 00:25:15.052 { 00:25:15.052 "uuid": "d9fa6ce6-f067-40de-adad-2382cb1c29e9", 00:25:15.052 "name": "lvs_n_0", 00:25:15.052 "base_bdev": "a84ade09-339a-43d5-9937-4b37076b3db5", 00:25:15.052 "total_data_clusters": 5114, 00:25:15.052 "free_clusters": 5114, 00:25:15.052 "block_size": 512, 00:25:15.052 "cluster_size": 4194304 00:25:15.052 } 00:25:15.052 ]' 00:25:15.052 23:24:20 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d9fa6ce6-f067-40de-adad-2382cb1c29e9") .free_clusters' 00:25:15.052 23:24:20 -- common/autotest_common.sh@1348 -- # fc=5114 00:25:15.052 23:24:20 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d9fa6ce6-f067-40de-adad-2382cb1c29e9") .cluster_size' 00:25:15.052 23:24:20 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:15.052 23:24:20 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:25:15.052 23:24:20 -- common/autotest_common.sh@1353 -- # echo 20456 00:25:15.052 20456 00:25:15.052 23:24:20 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:15.052 23:24:20 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d9fa6ce6-f067-40de-adad-2382cb1c29e9 lbd_nest_0 20456 00:25:15.310 23:24:20 -- host/perf.sh@88 -- # lb_nested_guid=9d7f4b6a-5cd7-463d-b847-f688a0ee68ce 00:25:15.310 23:24:20 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.570 23:24:21 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:15.570 23:24:21 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
9d7f4b6a-5cd7-463d-b847-f688a0ee68ce 00:25:15.570 23:24:21 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:15.828 23:24:21 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:15.828 23:24:21 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:15.828 23:24:21 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:15.828 23:24:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:15.828 23:24:21 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:15.828 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.048 Initializing NVMe Controllers 00:25:28.048 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.048 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:28.048 Initialization complete. Launching workers. 00:25:28.048 ======================================================== 00:25:28.048 Latency(us) 00:25:28.048 Device Information : IOPS MiB/s Average min max 00:25:28.048 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5944.95 2.90 167.80 67.29 8057.83 00:25:28.048 ======================================================== 00:25:28.048 Total : 5944.95 2.90 167.80 67.29 8057.83 00:25:28.048 00:25:28.048 23:24:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:28.048 23:24:32 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:28.048 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.287 Initializing NVMe Controllers 00:25:40.287 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.287 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:40.287 Initialization complete. Launching workers. 00:25:40.287 ======================================================== 00:25:40.287 Latency(us) 00:25:40.287 Device Information : IOPS MiB/s Average min max 00:25:40.287 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2680.74 335.09 372.83 155.02 8141.66 00:25:40.287 ======================================================== 00:25:40.287 Total : 2680.74 335.09 372.83 155.02 8141.66 00:25:40.287 00:25:40.287 23:24:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:40.287 23:24:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:40.287 23:24:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:40.287 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.267 Initializing NVMe Controllers 00:25:50.267 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:50.267 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:50.267 Initialization complete. Launching workers. 
00:25:50.267 ======================================================== 00:25:50.267 Latency(us) 00:25:50.267 Device Information : IOPS MiB/s Average min max 00:25:50.267 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12325.11 6.02 2596.36 820.19 8594.44 00:25:50.267 ======================================================== 00:25:50.267 Total : 12325.11 6.02 2596.36 820.19 8594.44 00:25:50.267 00:25:50.267 23:24:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:50.267 23:24:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:50.267 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.473 Initializing NVMe Controllers 00:26:02.473 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.473 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:02.473 Initialization complete. Launching workers. 00:26:02.473 ======================================================== 00:26:02.473 Latency(us) 00:26:02.473 Device Information : IOPS MiB/s Average min max 00:26:02.473 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3991.42 498.93 8023.23 3910.12 16042.87 00:26:02.473 ======================================================== 00:26:02.473 Total : 3991.42 498.93 8023.23 3910.12 16042.87 00:26:02.473 00:26:02.473 23:25:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:02.473 23:25:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:02.473 23:25:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:02.473 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.687 Initializing NVMe Controllers 00:26:14.687 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.687 Controller IO queue size 128, less than required. 00:26:14.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.687 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:14.687 Initialization complete. Launching workers. 00:26:14.687 ======================================================== 00:26:14.687 Latency(us) 00:26:14.687 Device Information : IOPS MiB/s Average min max 00:26:14.687 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19711.12 9.62 6495.75 1907.39 14901.53 00:26:14.687 ======================================================== 00:26:14.687 Total : 19711.12 9.62 6495.75 1907.39 14901.53 00:26:14.687 00:26:14.687 23:25:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:14.687 23:25:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:14.687 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.671 Initializing NVMe Controllers 00:26:24.671 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.671 Controller IO queue size 128, less than required. 00:26:24.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:24.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:24.671 Initialization complete. Launching workers. 00:26:24.671 ======================================================== 00:26:24.671 Latency(us) 00:26:24.672 Device Information : IOPS MiB/s Average min max 00:26:24.672 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11464.80 1433.10 11165.69 3354.63 23159.77 00:26:24.672 ======================================================== 00:26:24.672 Total : 11464.80 1433.10 11165.69 3354.63 23159.77 00:26:24.672 00:26:24.672 23:25:29 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:24.672 23:25:29 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9d7f4b6a-5cd7-463d-b847-f688a0ee68ce 00:26:24.931 23:25:30 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:24.931 23:25:30 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a84ade09-339a-43d5-9937-4b37076b3db5 00:26:25.190 23:25:30 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:25.449 23:25:31 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:25.449 23:25:31 -- host/perf.sh@114 -- # nvmftestfini 00:26:25.449 23:25:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:25.449 23:25:31 -- nvmf/common.sh@116 -- # sync 00:26:25.449 23:25:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:26:25.449 23:25:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:26:25.449 23:25:31 -- nvmf/common.sh@119 -- # set +e 00:26:25.449 23:25:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:25.449 23:25:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:26:25.449 rmmod nvme_rdma 00:26:25.449 rmmod nvme_fabrics 00:26:25.449 23:25:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:25.449 23:25:31 -- nvmf/common.sh@123 -- # set -e 00:26:25.449 23:25:31 -- nvmf/common.sh@124 -- # return 0 00:26:25.449 23:25:31 -- nvmf/common.sh@477 -- # '[' -n 721588 ']' 00:26:25.449 23:25:31 -- nvmf/common.sh@478 -- # killprocess 721588 00:26:25.449 23:25:31 -- common/autotest_common.sh@926 -- # '[' -z 721588 ']' 00:26:25.449 23:25:31 -- common/autotest_common.sh@930 -- # kill -0 721588 00:26:25.449 23:25:31 -- common/autotest_common.sh@931 -- # uname 00:26:25.449 23:25:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:25.449 23:25:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 721588 00:26:25.708 23:25:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:25.708 23:25:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:25.708 23:25:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 721588' 00:26:25.708 killing process with pid 721588 00:26:25.708 23:25:31 -- common/autotest_common.sh@945 -- # kill 721588 00:26:25.708 23:25:31 -- common/autotest_common.sh@950 -- # wait 721588 00:26:28.244 23:25:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:28.244 23:25:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:26:28.244 00:26:28.244 real 1m51.763s 00:26:28.244 user 7m3.556s 00:26:28.244 sys 0m6.820s 00:26:28.244 23:25:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.244 23:25:33 -- common/autotest_common.sh@10 -- # 
set +x 00:26:28.244 ************************************ 00:26:28.244 END TEST nvmf_perf 00:26:28.244 ************************************ 00:26:28.244 23:25:33 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:28.244 23:25:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:28.244 23:25:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.244 23:25:33 -- common/autotest_common.sh@10 -- # set +x 00:26:28.244 ************************************ 00:26:28.244 START TEST nvmf_fio_host 00:26:28.244 ************************************ 00:26:28.244 23:25:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:28.244 * Looking for test storage... 00:26:28.244 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:28.244 23:25:33 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:28.244 23:25:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.244 23:25:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.244 23:25:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.244 23:25:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.244 23:25:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.245 23:25:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.245 23:25:33 -- paths/export.sh@5 -- # export PATH 00:26:28.245 23:25:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.245 23:25:33 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.245 23:25:33 -- nvmf/common.sh@7 -- # uname -s 00:26:28.245 23:25:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.245 23:25:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.245 23:25:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.245 23:25:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.245 23:25:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.245 23:25:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.245 23:25:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.245 23:25:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.245 23:25:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.245 23:25:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.245 23:25:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:28.245 23:25:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:28.245 23:25:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.245 23:25:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.245 23:25:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.245 23:25:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:28.245 23:25:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.245 23:25:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.245 23:25:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.245 23:25:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.245 23:25:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.245 
23:25:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.245 23:25:33 -- paths/export.sh@5 -- # export PATH 00:26:28.245 23:25:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.245 23:25:33 -- nvmf/common.sh@46 -- # : 0 00:26:28.245 23:25:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:28.245 23:25:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:28.245 23:25:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:28.245 23:25:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.245 23:25:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.245 23:25:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:28.245 23:25:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:28.245 23:25:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:28.245 23:25:33 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:26:28.245 23:25:33 -- host/fio.sh@14 -- # nvmftestinit 00:26:28.245 23:25:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:26:28.245 23:25:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.245 23:25:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:28.245 23:25:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:28.245 23:25:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:28.245 23:25:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.245 23:25:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.245 23:25:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.245 23:25:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:28.245 23:25:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:28.245 23:25:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:28.245 23:25:33 -- common/autotest_common.sh@10 -- # set +x 00:26:34.821 23:25:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:34.821 23:25:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:34.821 23:25:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:34.821 23:25:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:34.821 23:25:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:34.821 23:25:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:34.821 23:25:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:34.821 23:25:40 -- 
nvmf/common.sh@294 -- # net_devs=() 00:26:34.821 23:25:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:34.821 23:25:40 -- nvmf/common.sh@295 -- # e810=() 00:26:34.821 23:25:40 -- nvmf/common.sh@295 -- # local -ga e810 00:26:34.821 23:25:40 -- nvmf/common.sh@296 -- # x722=() 00:26:34.821 23:25:40 -- nvmf/common.sh@296 -- # local -ga x722 00:26:34.821 23:25:40 -- nvmf/common.sh@297 -- # mlx=() 00:26:34.821 23:25:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:34.821 23:25:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.821 23:25:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:34.821 23:25:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:26:34.821 23:25:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:26:34.821 23:25:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:26:34.821 23:25:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:34.821 23:25:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:34.821 23:25:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:34.821 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:34.821 23:25:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:26:34.821 23:25:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:34.821 23:25:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:34.821 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:34.821 23:25:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:26:34.821 23:25:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:34.821 23:25:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:34.821 
23:25:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.821 23:25:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:34.821 23:25:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.821 23:25:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:34.821 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:34.821 23:25:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.821 23:25:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:34.821 23:25:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.821 23:25:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:34.821 23:25:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.821 23:25:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:34.821 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:34.821 23:25:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.821 23:25:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:34.821 23:25:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:34.821 23:25:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:26:34.821 23:25:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:26:34.821 23:25:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:26:34.821 23:25:40 -- nvmf/common.sh@57 -- # uname 00:26:34.821 23:25:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:26:34.821 23:25:40 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:26:34.821 23:25:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:26:34.821 23:25:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:26:34.821 23:25:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:26:34.821 23:25:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:26:34.821 23:25:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:26:34.821 23:25:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:26:34.821 23:25:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:26:34.821 23:25:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:34.821 23:25:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:26:34.821 23:25:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:34.821 23:25:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:26:34.821 23:25:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:26:34.821 23:25:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:34.821 23:25:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:26:34.821 23:25:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:34.821 23:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.821 23:25:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:26:34.822 23:25:40 -- nvmf/common.sh@104 -- # continue 2 00:26:34.822 23:25:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:34.822 23:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.822 23:25:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.822 23:25:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:26:34.822 23:25:40 -- nvmf/common.sh@104 -- # continue 2 00:26:34.822 23:25:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:26:34.822 23:25:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:26:34.822 23:25:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:34.822 23:25:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:26:34.822 23:25:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:26:34.822 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:34.822 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:34.822 altname enp217s0f0np0 00:26:34.822 altname ens818f0np0 00:26:34.822 inet 192.168.100.8/24 scope global mlx_0_0 00:26:34.822 valid_lft forever preferred_lft forever 00:26:34.822 23:25:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:26:34.822 23:25:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:26:34.822 23:25:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:34.822 23:25:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:26:34.822 23:25:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:26:34.822 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:34.822 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:34.822 altname enp217s0f1np1 00:26:34.822 altname ens818f1np1 00:26:34.822 inet 192.168.100.9/24 scope global mlx_0_1 00:26:34.822 valid_lft forever preferred_lft forever 00:26:34.822 23:25:40 -- nvmf/common.sh@410 -- # return 0 00:26:34.822 23:25:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:34.822 23:25:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:34.822 23:25:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:26:34.822 23:25:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:26:34.822 23:25:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:34.822 23:25:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:26:34.822 23:25:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:26:34.822 23:25:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:34.822 23:25:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:26:34.822 23:25:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:34.822 23:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.822 23:25:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:26:34.822 23:25:40 -- nvmf/common.sh@104 -- # continue 2 00:26:34.822 23:25:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:34.822 23:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.822 23:25:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.822 23:25:40 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:34.822 23:25:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:26:34.822 23:25:40 -- nvmf/common.sh@104 -- # continue 2 00:26:34.822 23:25:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:26:34.822 23:25:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:26:34.822 23:25:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:34.822 23:25:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:26:34.822 23:25:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:26:34.822 23:25:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:34.822 23:25:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:34.822 23:25:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:26:34.822 192.168.100.9' 00:26:34.822 23:25:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:26:34.822 192.168.100.9' 00:26:34.822 23:25:40 -- nvmf/common.sh@445 -- # head -n 1 00:26:34.822 23:25:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:34.822 23:25:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:26:34.822 192.168.100.9' 00:26:34.822 23:25:40 -- nvmf/common.sh@446 -- # tail -n +2 00:26:34.822 23:25:40 -- nvmf/common.sh@446 -- # head -n 1 00:26:34.822 23:25:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:34.822 23:25:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:26:34.822 23:25:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:34.822 23:25:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:26:34.822 23:25:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:26:34.822 23:25:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:26:34.822 23:25:40 -- host/fio.sh@16 -- # [[ y != y ]] 00:26:34.822 23:25:40 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:34.822 23:25:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:34.822 23:25:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.822 23:25:40 -- host/fio.sh@24 -- # nvmfpid=743135 00:26:34.822 23:25:40 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:34.822 23:25:40 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:34.822 23:25:40 -- host/fio.sh@28 -- # waitforlisten 743135 00:26:34.822 23:25:40 -- common/autotest_common.sh@819 -- # '[' -z 743135 ']' 00:26:34.822 23:25:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.822 23:25:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:34.822 23:25:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.822 23:25:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:34.822 23:25:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.082 [2024-11-02 23:25:40.583179] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:35.082 [2024-11-02 23:25:40.583230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.082 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.082 [2024-11-02 23:25:40.653172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.082 [2024-11-02 23:25:40.728458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:35.082 [2024-11-02 23:25:40.728564] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.082 [2024-11-02 23:25:40.728574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.082 [2024-11-02 23:25:40.728584] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.082 [2024-11-02 23:25:40.728629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.082 [2024-11-02 23:25:40.728741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.082 [2024-11-02 23:25:40.728838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.082 [2024-11-02 23:25:40.728840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.019 23:25:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:36.019 23:25:41 -- common/autotest_common.sh@852 -- # return 0 00:26:36.019 23:25:41 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:36.019 [2024-11-02 23:25:41.594163] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9ef090/0x9f3580) succeed. 00:26:36.019 [2024-11-02 23:25:41.603534] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9f0680/0xa34c20) succeed. 
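At this point in the trace the nvmf target (nvmf_tgt) is up, the RDMA transport has been created and both mlx5 IB devices registered; the lines that follow provision the Malloc-backed subsystem that the fio host test exercises. A condensed sketch of that RPC sequence is shown here for orientation, built only from commands and values visible in this log (rpc.py path, 192.168.100.8 listener address); the test itself drives these calls from host/fio.sh, not from a standalone script like this.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport with the same shared-buffer settings used above
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB / 512 B-block RAM disk to back the namespace
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  # expose the subsystem (and discovery) on the first RDMA IP found earlier
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

fio is then run against that listener through the SPDK NVMe plugin, as the LD_PRELOAD/fio invocation in the following trace shows.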
00:26:36.019 23:25:41 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:36.019 23:25:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:36.019 23:25:41 -- common/autotest_common.sh@10 -- # set +x 00:26:36.279 23:25:41 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:36.279 Malloc1 00:26:36.279 23:25:41 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.539 23:25:42 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:36.798 23:25:42 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:36.798 [2024-11-02 23:25:42.532431] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:37.057 23:25:42 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:37.057 23:25:42 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:26:37.057 23:25:42 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:37.057 23:25:42 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:37.057 23:25:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:37.057 23:25:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:37.057 23:25:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:37.057 23:25:42 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:37.057 23:25:42 -- common/autotest_common.sh@1320 -- # shift 00:26:37.057 23:25:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:37.057 23:25:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:37.057 23:25:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:37.057 23:25:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:37.057 23:25:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:37.057 23:25:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:37.057 23:25:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:37.057 23:25:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:37.057 23:25:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:37.057 23:25:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:37.057 23:25:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:37.057 23:25:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:37.057 23:25:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:37.057 23:25:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:37.057 23:25:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:37.623 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:37.623 fio-3.35 00:26:37.623 Starting 1 thread 00:26:37.623 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.651 00:26:39.651 test: (groupid=0, jobs=1): err= 0: pid=743620: Sat Nov 2 23:25:45 2024 00:26:39.651 read: IOPS=19.1k, BW=74.7MiB/s (78.3MB/s)(150MiB/2003msec) 00:26:39.651 slat (nsec): min=1332, max=25783, avg=1489.12, stdev=447.85 00:26:39.651 clat (usec): min=1927, max=6019, avg=3325.95, stdev=80.40 00:26:39.651 lat (usec): min=1953, max=6021, avg=3327.44, stdev=80.35 00:26:39.651 clat percentiles (usec): 00:26:39.651 | 1.00th=[ 3294], 5.00th=[ 3294], 10.00th=[ 3294], 20.00th=[ 3326], 00:26:39.651 | 30.00th=[ 3326], 40.00th=[ 3326], 50.00th=[ 3326], 60.00th=[ 3326], 00:26:39.651 | 70.00th=[ 3326], 80.00th=[ 3326], 90.00th=[ 3359], 95.00th=[ 3359], 00:26:39.651 | 99.00th=[ 3392], 99.50th=[ 3458], 99.90th=[ 4293], 99.95th=[ 5211], 00:26:39.651 | 99.99th=[ 5997] 00:26:39.651 bw ( KiB/s): min=74824, max=77208, per=99.97%, avg=76456.00, stdev=1099.14, samples=4 00:26:39.651 iops : min=18706, max=19302, avg=19114.00, stdev=274.78, samples=4 00:26:39.651 write: IOPS=19.1k, BW=74.6MiB/s (78.2MB/s)(149MiB/2003msec); 0 zone resets 00:26:39.651 slat (nsec): min=1367, max=17623, avg=1578.33, stdev=511.95 00:26:39.651 clat (usec): min=1969, max=6007, avg=3324.27, stdev=73.89 00:26:39.651 lat (usec): min=1980, max=6009, avg=3325.85, stdev=73.84 00:26:39.651 clat percentiles (usec): 00:26:39.651 | 1.00th=[ 3294], 5.00th=[ 3294], 10.00th=[ 3294], 20.00th=[ 3326], 00:26:39.651 | 30.00th=[ 3326], 40.00th=[ 3326], 50.00th=[ 3326], 60.00th=[ 3326], 00:26:39.651 | 70.00th=[ 3326], 80.00th=[ 3326], 90.00th=[ 3359], 95.00th=[ 3359], 00:26:39.651 | 99.00th=[ 3392], 99.50th=[ 3458], 99.90th=[ 4293], 99.95th=[ 5080], 00:26:39.651 | 99.99th=[ 5997] 00:26:39.651 bw ( KiB/s): min=74752, max=77280, per=100.00%, avg=76396.00, stdev=1144.52, samples=4 00:26:39.651 iops : min=18688, max=19320, avg=19099.00, stdev=286.13, samples=4 00:26:39.651 lat (msec) : 2=0.01%, 4=99.86%, 10=0.13% 00:26:39.651 cpu : usr=99.65%, sys=0.00%, ctx=15, majf=0, minf=2 00:26:39.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:39.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:39.651 issued rwts: total=38296,38257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:39.651 00:26:39.651 Run status group 0 (all jobs): 00:26:39.651 READ: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=150MiB (157MB), run=2003-2003msec 00:26:39.651 WRITE: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=149MiB (157MB), run=2003-2003msec 00:26:39.910 23:25:45 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:39.910 23:25:45 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:39.910 23:25:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:39.910 23:25:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:39.910 23:25:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:39.910 23:25:45 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:39.910 23:25:45 -- common/autotest_common.sh@1320 -- # shift 00:26:39.910 23:25:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:39.910 23:25:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:39.910 23:25:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:39.910 23:25:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:39.910 23:25:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:39.910 23:25:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:39.910 23:25:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:39.910 23:25:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:39.910 23:25:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:39.910 23:25:45 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:39.910 23:25:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:39.910 23:25:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:39.910 23:25:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:39.910 23:25:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:39.910 23:25:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:40.169 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:40.169 fio-3.35 00:26:40.169 Starting 1 thread 00:26:40.169 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.708 00:26:42.708 test: (groupid=0, jobs=1): err= 0: pid=744245: Sat Nov 2 23:25:48 2024 00:26:42.708 read: IOPS=15.2k, BW=237MiB/s (248MB/s)(464MiB/1959msec) 00:26:42.708 slat (nsec): min=2219, max=50661, avg=2602.97, stdev=1061.75 00:26:42.708 clat (usec): min=280, max=9147, avg=1634.99, stdev=1357.47 00:26:42.708 lat (usec): min=283, max=9152, avg=1637.59, stdev=1357.91 00:26:42.708 clat percentiles (usec): 00:26:42.708 | 1.00th=[ 644], 5.00th=[ 742], 10.00th=[ 791], 20.00th=[ 873], 00:26:42.708 | 30.00th=[ 947], 40.00th=[ 1029], 50.00th=[ 1139], 60.00th=[ 1237], 00:26:42.708 | 70.00th=[ 1369], 80.00th=[ 1614], 90.00th=[ 4555], 95.00th=[ 4686], 00:26:42.708 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 8029], 99.95th=[ 8717], 00:26:42.708 | 99.99th=[ 9110] 00:26:42.708 bw ( KiB/s): min=103808, max=123680, per=47.94%, avg=116264.00, stdev=8625.46, samples=4 00:26:42.708 iops : min= 6488, max= 7730, avg=7266.50, stdev=539.09, samples=4 00:26:42.708 write: IOPS=8656, BW=135MiB/s (142MB/s)(237MiB/1752msec); 0 zone resets 00:26:42.708 slat (usec): min=26, max=125, avg=28.94, 
stdev= 5.37 00:26:42.708 clat (usec): min=4037, max=18462, avg=11817.51, stdev=1644.82 00:26:42.708 lat (usec): min=4066, max=18489, avg=11846.45, stdev=1644.42 00:26:42.708 clat percentiles (usec): 00:26:42.708 | 1.00th=[ 6718], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10552], 00:26:42.708 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:26:42.708 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13829], 95.00th=[14615], 00:26:42.708 | 99.00th=[15926], 99.50th=[16319], 99.90th=[17433], 99.95th=[17695], 00:26:42.708 | 99.99th=[18482] 00:26:42.708 bw ( KiB/s): min=110464, max=128768, per=87.22%, avg=120800.00, stdev=7731.34, samples=4 00:26:42.708 iops : min= 6904, max= 8048, avg=7550.00, stdev=483.21, samples=4 00:26:42.708 lat (usec) : 500=0.02%, 750=3.82%, 1000=20.52% 00:26:42.708 lat (msec) : 2=31.75%, 4=2.14%, 10=11.41%, 20=30.33% 00:26:42.708 cpu : usr=96.46%, sys=1.60%, ctx=225, majf=0, minf=1 00:26:42.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:42.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:42.708 issued rwts: total=29694,15166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:42.708 00:26:42.708 Run status group 0 (all jobs): 00:26:42.708 READ: bw=237MiB/s (248MB/s), 237MiB/s-237MiB/s (248MB/s-248MB/s), io=464MiB (487MB), run=1959-1959msec 00:26:42.708 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=237MiB (248MB), run=1752-1752msec 00:26:42.708 23:25:48 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.708 23:25:48 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:26:42.708 23:25:48 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:26:42.708 23:25:48 -- host/fio.sh@51 -- # get_nvme_bdfs 00:26:42.708 23:25:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:42.708 23:25:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:26:42.708 23:25:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:42.708 23:25:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:42.709 23:25:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:42.968 23:25:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:26:42.968 23:25:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:26:42.968 23:25:48 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:26:46.261 Nvme0n1 00:26:46.261 23:25:51 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:26:51.538 23:25:56 -- host/fio.sh@53 -- # ls_guid=c0cf4a7c-bcaf-4f9a-8677-01e88a39332f 00:26:51.538 23:25:56 -- host/fio.sh@54 -- # get_lvs_free_mb c0cf4a7c-bcaf-4f9a-8677-01e88a39332f 00:26:51.538 23:25:56 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c0cf4a7c-bcaf-4f9a-8677-01e88a39332f 00:26:51.538 23:25:56 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:51.538 23:25:56 -- common/autotest_common.sh@1345 -- # local fc 00:26:51.538 23:25:56 -- common/autotest_common.sh@1346 -- # local cs 00:26:51.538 23:25:57 -- 
common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:51.538 23:25:57 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:51.538 { 00:26:51.538 "uuid": "c0cf4a7c-bcaf-4f9a-8677-01e88a39332f", 00:26:51.538 "name": "lvs_0", 00:26:51.538 "base_bdev": "Nvme0n1", 00:26:51.538 "total_data_clusters": 1862, 00:26:51.538 "free_clusters": 1862, 00:26:51.538 "block_size": 512, 00:26:51.538 "cluster_size": 1073741824 00:26:51.538 } 00:26:51.538 ]' 00:26:51.538 23:25:57 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c0cf4a7c-bcaf-4f9a-8677-01e88a39332f") .free_clusters' 00:26:51.538 23:25:57 -- common/autotest_common.sh@1348 -- # fc=1862 00:26:51.538 23:25:57 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c0cf4a7c-bcaf-4f9a-8677-01e88a39332f") .cluster_size' 00:26:51.538 23:25:57 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:26:51.538 23:25:57 -- common/autotest_common.sh@1352 -- # free_mb=1906688 00:26:51.538 23:25:57 -- common/autotest_common.sh@1353 -- # echo 1906688 00:26:51.538 1906688 00:26:51.538 23:25:57 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:26:52.106 abbe62bc-aa35-488a-809a-8e93d5cfa236 00:26:52.106 23:25:57 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:26:52.366 23:25:57 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:26:52.625 23:25:58 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:52.625 23:25:58 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:52.625 23:25:58 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:52.625 23:25:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:52.625 23:25:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:52.625 23:25:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:52.625 23:25:58 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.625 23:25:58 -- common/autotest_common.sh@1320 -- # shift 00:26:52.625 23:25:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:52.625 23:25:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.625 23:25:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.625 23:25:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:52.625 23:25:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:52.915 23:25:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:52.915 23:25:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:52.915 23:25:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 
00:26:52.915 23:25:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.915 23:25:58 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:52.915 23:25:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:52.915 23:25:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:52.915 23:25:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:52.915 23:25:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:52.915 23:25:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:53.178 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:53.178 fio-3.35 00:26:53.178 Starting 1 thread 00:26:53.178 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.716 00:26:55.716 test: (groupid=0, jobs=1): err= 0: pid=746561: Sat Nov 2 23:26:01 2024 00:26:55.716 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(82.0MiB/2005msec) 00:26:55.716 slat (nsec): min=1330, max=17478, avg=1429.67, stdev=281.08 00:26:55.716 clat (usec): min=175, max=332798, avg=6068.79, stdev=18176.85 00:26:55.716 lat (usec): min=176, max=332801, avg=6070.22, stdev=18176.87 00:26:55.716 clat percentiles (msec): 00:26:55.716 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:26:55.716 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:26:55.716 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:26:55.716 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:26:55.716 | 99.99th=[ 334] 00:26:55.716 bw ( KiB/s): min=15712, max=50728, per=99.94%, avg=41852.00, stdev=17428.18, samples=4 00:26:55.716 iops : min= 3928, max=12682, avg=10463.00, stdev=4357.05, samples=4 00:26:55.716 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(82.0MiB/2005msec); 0 zone resets 00:26:55.716 slat (nsec): min=1359, max=17199, avg=1546.72, stdev=247.25 00:26:55.716 clat (usec): min=155, max=333117, avg=6040.57, stdev=17677.17 00:26:55.716 lat (usec): min=156, max=333121, avg=6042.12, stdev=17677.22 00:26:55.716 clat percentiles (msec): 00:26:55.716 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:26:55.716 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:26:55.716 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:26:55.716 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:26:55.716 | 99.99th=[ 334] 00:26:55.716 bw ( KiB/s): min=16528, max=50776, per=99.95%, avg=41858.00, stdev=16890.10, samples=4 00:26:55.716 iops : min= 4132, max=12694, avg=10464.50, stdev=4222.53, samples=4 00:26:55.716 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:26:55.716 lat (msec) : 2=0.04%, 4=0.30%, 10=99.31%, 500=0.30% 00:26:55.716 cpu : usr=99.50%, sys=0.15%, ctx=17, majf=0, minf=2 00:26:55.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:55.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.716 issued rwts: total=20991,20991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.716 00:26:55.716 Run status group 0 (all jobs): 00:26:55.716 READ: bw=40.9MiB/s (42.9MB/s), 
40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=82.0MiB (86.0MB), run=2005-2005msec 00:26:55.716 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=82.0MiB (86.0MB), run=2005-2005msec 00:26:55.716 23:26:01 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:55.716 23:26:01 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:26:57.093 23:26:02 -- host/fio.sh@64 -- # ls_nested_guid=152b4de6-d801-419f-a717-d351aa1f31ff 00:26:57.093 23:26:02 -- host/fio.sh@65 -- # get_lvs_free_mb 152b4de6-d801-419f-a717-d351aa1f31ff 00:26:57.093 23:26:02 -- common/autotest_common.sh@1343 -- # local lvs_uuid=152b4de6-d801-419f-a717-d351aa1f31ff 00:26:57.093 23:26:02 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:57.093 23:26:02 -- common/autotest_common.sh@1345 -- # local fc 00:26:57.093 23:26:02 -- common/autotest_common.sh@1346 -- # local cs 00:26:57.093 23:26:02 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:57.093 23:26:02 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:57.093 { 00:26:57.093 "uuid": "c0cf4a7c-bcaf-4f9a-8677-01e88a39332f", 00:26:57.093 "name": "lvs_0", 00:26:57.093 "base_bdev": "Nvme0n1", 00:26:57.093 "total_data_clusters": 1862, 00:26:57.093 "free_clusters": 0, 00:26:57.093 "block_size": 512, 00:26:57.093 "cluster_size": 1073741824 00:26:57.093 }, 00:26:57.093 { 00:26:57.093 "uuid": "152b4de6-d801-419f-a717-d351aa1f31ff", 00:26:57.093 "name": "lvs_n_0", 00:26:57.093 "base_bdev": "abbe62bc-aa35-488a-809a-8e93d5cfa236", 00:26:57.093 "total_data_clusters": 476206, 00:26:57.093 "free_clusters": 476206, 00:26:57.093 "block_size": 512, 00:26:57.093 "cluster_size": 4194304 00:26:57.093 } 00:26:57.093 ]' 00:26:57.093 23:26:02 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="152b4de6-d801-419f-a717-d351aa1f31ff") .free_clusters' 00:26:57.093 23:26:02 -- common/autotest_common.sh@1348 -- # fc=476206 00:26:57.093 23:26:02 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="152b4de6-d801-419f-a717-d351aa1f31ff") .cluster_size' 00:26:57.093 23:26:02 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:57.093 23:26:02 -- common/autotest_common.sh@1352 -- # free_mb=1904824 00:26:57.093 23:26:02 -- common/autotest_common.sh@1353 -- # echo 1904824 00:26:57.093 1904824 00:26:57.093 23:26:02 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:26:58.031 a282a0af-3646-4e3f-bca8-e85b1c9bfd50 00:26:58.031 23:26:03 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:26:58.290 23:26:03 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:26:58.290 23:26:04 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:58.548 23:26:04 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:58.548 23:26:04 -- 
common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:58.548 23:26:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:58.548 23:26:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:58.548 23:26:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:58.548 23:26:04 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:58.548 23:26:04 -- common/autotest_common.sh@1320 -- # shift 00:26:58.548 23:26:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:58.548 23:26:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.548 23:26:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:58.548 23:26:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:58.548 23:26:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:58.548 23:26:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:58.549 23:26:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:58.549 23:26:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.549 23:26:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:58.549 23:26:04 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:58.549 23:26:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:58.549 23:26:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:58.549 23:26:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:58.549 23:26:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:58.549 23:26:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:58.806 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:58.806 fio-3.35 00:26:58.806 Starting 1 thread 00:26:59.065 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.601 00:27:01.601 test: (groupid=0, jobs=1): err= 0: pid=747769: Sat Nov 2 23:26:06 2024 00:27:01.601 read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(85.0MiB/2005msec) 00:27:01.601 slat (nsec): min=1337, max=111281, avg=1451.42, stdev=807.42 00:27:01.601 clat (usec): min=2977, max=10251, avg=5832.90, stdev=188.48 00:27:01.601 lat (usec): min=2980, max=10252, avg=5834.35, stdev=188.45 00:27:01.601 clat percentiles (usec): 00:27:01.601 | 1.00th=[ 5211], 5.00th=[ 5800], 10.00th=[ 5800], 20.00th=[ 5800], 00:27:01.601 | 30.00th=[ 5800], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5866], 00:27:01.601 | 70.00th=[ 5866], 80.00th=[ 5866], 90.00th=[ 5866], 95.00th=[ 5866], 00:27:01.601 | 99.00th=[ 6456], 99.50th=[ 6521], 99.90th=[ 8029], 99.95th=[ 9503], 00:27:01.601 | 99.99th=[10290] 00:27:01.601 bw ( KiB/s): min=41768, max=44144, per=99.98%, avg=43384.00, stdev=1090.06, samples=4 00:27:01.601 iops : min=10442, max=11036, avg=10846.00, stdev=272.51, samples=4 00:27:01.601 write: IOPS=10.8k, BW=42.3MiB/s (44.3MB/s)(84.8MiB/2005msec); 0 zone 
resets 00:27:01.601 slat (nsec): min=1378, max=11960, avg=1540.90, stdev=219.81 00:27:01.601 clat (usec): min=2980, max=10240, avg=5854.27, stdev=198.71 00:27:01.601 lat (usec): min=2983, max=10241, avg=5855.81, stdev=198.69 00:27:01.601 clat percentiles (usec): 00:27:01.601 | 1.00th=[ 5211], 5.00th=[ 5800], 10.00th=[ 5800], 20.00th=[ 5800], 00:27:01.601 | 30.00th=[ 5866], 40.00th=[ 5866], 50.00th=[ 5866], 60.00th=[ 5866], 00:27:01.601 | 70.00th=[ 5866], 80.00th=[ 5866], 90.00th=[ 5866], 95.00th=[ 5932], 00:27:01.601 | 99.00th=[ 6521], 99.50th=[ 6521], 99.90th=[ 8717], 99.95th=[ 9503], 00:27:01.601 | 99.99th=[10159] 00:27:01.601 bw ( KiB/s): min=42184, max=43928, per=99.93%, avg=43276.00, stdev=757.75, samples=4 00:27:01.601 iops : min=10546, max=10982, avg=10819.00, stdev=189.44, samples=4 00:27:01.601 lat (msec) : 4=0.04%, 10=99.93%, 20=0.03% 00:27:01.601 cpu : usr=99.50%, sys=0.10%, ctx=16, majf=0, minf=2 00:27:01.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:01.601 issued rwts: total=21750,21707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:01.601 00:27:01.601 Run status group 0 (all jobs): 00:27:01.601 READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=85.0MiB (89.1MB), run=2005-2005msec 00:27:01.602 WRITE: bw=42.3MiB/s (44.3MB/s), 42.3MiB/s-42.3MiB/s (44.3MB/s-44.3MB/s), io=84.8MiB (88.9MB), run=2005-2005msec 00:27:01.602 23:26:06 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:01.602 23:26:07 -- host/fio.sh@74 -- # sync 00:27:01.602 23:26:07 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:09.725 23:26:14 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:09.725 23:26:14 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:14.998 23:26:20 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:14.998 23:26:20 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:18.288 23:26:23 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:18.288 23:26:23 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:18.288 23:26:23 -- host/fio.sh@86 -- # nvmftestfini 00:27:18.288 23:26:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:18.288 23:26:23 -- nvmf/common.sh@116 -- # sync 00:27:18.288 23:26:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:18.288 23:26:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:18.288 23:26:23 -- nvmf/common.sh@119 -- # set +e 00:27:18.288 23:26:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:18.288 23:26:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:18.288 rmmod nvme_rdma 00:27:18.288 rmmod nvme_fabrics 00:27:18.288 23:26:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:18.288 23:26:23 -- nvmf/common.sh@123 -- # set -e 00:27:18.288 23:26:23 -- nvmf/common.sh@124 -- # return 0 00:27:18.288 23:26:23 -- nvmf/common.sh@477 -- # '[' -n 743135 ']' 00:27:18.288 23:26:23 -- 
nvmf/common.sh@478 -- # killprocess 743135 00:27:18.288 23:26:23 -- common/autotest_common.sh@926 -- # '[' -z 743135 ']' 00:27:18.288 23:26:23 -- common/autotest_common.sh@930 -- # kill -0 743135 00:27:18.288 23:26:23 -- common/autotest_common.sh@931 -- # uname 00:27:18.288 23:26:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:18.288 23:26:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 743135 00:27:18.288 23:26:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:18.288 23:26:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:18.288 23:26:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 743135' 00:27:18.288 killing process with pid 743135 00:27:18.288 23:26:23 -- common/autotest_common.sh@945 -- # kill 743135 00:27:18.288 23:26:23 -- common/autotest_common.sh@950 -- # wait 743135 00:27:18.288 23:26:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:18.288 23:26:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:18.288 00:27:18.288 real 0m50.062s 00:27:18.288 user 3m39.094s 00:27:18.288 sys 0m7.518s 00:27:18.288 23:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.288 23:26:23 -- common/autotest_common.sh@10 -- # set +x 00:27:18.288 ************************************ 00:27:18.288 END TEST nvmf_fio_host 00:27:18.288 ************************************ 00:27:18.288 23:26:23 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:18.288 23:26:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:18.288 23:26:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:18.288 23:26:23 -- common/autotest_common.sh@10 -- # set +x 00:27:18.288 ************************************ 00:27:18.288 START TEST nvmf_failover 00:27:18.288 ************************************ 00:27:18.288 23:26:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:18.288 * Looking for test storage... 
00:27:18.548 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:18.548 23:26:24 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.548 23:26:24 -- nvmf/common.sh@7 -- # uname -s 00:27:18.548 23:26:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.548 23:26:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.548 23:26:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.548 23:26:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.548 23:26:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.548 23:26:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.548 23:26:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.548 23:26:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.548 23:26:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.548 23:26:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.548 23:26:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:18.548 23:26:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:18.548 23:26:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.548 23:26:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.548 23:26:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.548 23:26:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:18.548 23:26:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.548 23:26:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.548 23:26:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.548 23:26:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.548 23:26:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.548 23:26:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.548 23:26:24 -- paths/export.sh@5 -- # export PATH 00:27:18.548 23:26:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.548 23:26:24 -- nvmf/common.sh@46 -- # : 0 00:27:18.548 23:26:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:18.548 23:26:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:18.548 23:26:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:18.548 23:26:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.548 23:26:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.548 23:26:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:18.548 23:26:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:18.548 23:26:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:18.548 23:26:24 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.548 23:26:24 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.548 23:26:24 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:18.548 23:26:24 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:18.548 23:26:24 -- host/failover.sh@18 -- # nvmftestinit 00:27:18.548 23:26:24 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:18.548 23:26:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.548 23:26:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:18.548 23:26:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:18.548 23:26:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:18.548 23:26:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.549 23:26:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.549 23:26:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.549 23:26:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:18.549 23:26:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:18.549 23:26:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:18.549 23:26:24 -- common/autotest_common.sh@10 -- # set +x 00:27:25.204 23:26:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:25.204 23:26:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:25.204 23:26:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:25.204 23:26:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:25.204 23:26:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:25.204 23:26:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:25.204 23:26:30 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:27:25.204 23:26:30 -- nvmf/common.sh@294 -- # net_devs=() 00:27:25.204 23:26:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:25.204 23:26:30 -- nvmf/common.sh@295 -- # e810=() 00:27:25.204 23:26:30 -- nvmf/common.sh@295 -- # local -ga e810 00:27:25.204 23:26:30 -- nvmf/common.sh@296 -- # x722=() 00:27:25.204 23:26:30 -- nvmf/common.sh@296 -- # local -ga x722 00:27:25.204 23:26:30 -- nvmf/common.sh@297 -- # mlx=() 00:27:25.204 23:26:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:25.204 23:26:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.204 23:26:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:25.204 23:26:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:25.204 23:26:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:25.204 23:26:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:25.204 23:26:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:25.204 23:26:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:25.204 23:26:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:25.204 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:25.204 23:26:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:25.204 23:26:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:25.204 23:26:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:25.204 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:25.204 23:26:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:25.204 23:26:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:25.204 23:26:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:25.204 23:26:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:25.204 23:26:30 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:25.204 23:26:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.204 23:26:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:25.204 23:26:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.204 23:26:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:25.204 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:25.204 23:26:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.204 23:26:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.205 23:26:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:25.205 23:26:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.205 23:26:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:25.205 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.205 23:26:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:25.205 23:26:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:25.205 23:26:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:25.205 23:26:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:25.205 23:26:30 -- nvmf/common.sh@57 -- # uname 00:27:25.205 23:26:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:25.205 23:26:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:25.205 23:26:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:25.205 23:26:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:25.205 23:26:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:25.205 23:26:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:25.205 23:26:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:25.205 23:26:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:25.205 23:26:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:25.205 23:26:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:25.205 23:26:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:25.205 23:26:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:25.205 23:26:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:25.205 23:26:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:25.205 23:26:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:25.205 23:26:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:25.205 23:26:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:25.205 23:26:30 -- nvmf/common.sh@104 -- # continue 2 00:27:25.205 23:26:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@104 -- # continue 2 00:27:25.205 23:26:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:25.205 23:26:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:25.205 23:26:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:25.205 23:26:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:25.205 23:26:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:25.205 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:25.205 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:25.205 altname enp217s0f0np0 00:27:25.205 altname ens818f0np0 00:27:25.205 inet 192.168.100.8/24 scope global mlx_0_0 00:27:25.205 valid_lft forever preferred_lft forever 00:27:25.205 23:26:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:25.205 23:26:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:25.205 23:26:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:25.205 23:26:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:25.205 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:25.205 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:25.205 altname enp217s0f1np1 00:27:25.205 altname ens818f1np1 00:27:25.205 inet 192.168.100.9/24 scope global mlx_0_1 00:27:25.205 valid_lft forever preferred_lft forever 00:27:25.205 23:26:30 -- nvmf/common.sh@410 -- # return 0 00:27:25.205 23:26:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:25.205 23:26:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:25.205 23:26:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:25.205 23:26:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:25.205 23:26:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:25.205 23:26:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:25.205 23:26:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:25.205 23:26:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:25.205 23:26:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:25.205 23:26:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:25.205 23:26:30 -- nvmf/common.sh@104 -- # continue 2 00:27:25.205 23:26:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:27:25.205 23:26:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:25.205 23:26:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@104 -- # continue 2 00:27:25.205 23:26:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:25.205 23:26:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:25.205 23:26:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:25.205 23:26:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:25.205 23:26:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:25.205 23:26:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:25.205 23:26:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:25.205 192.168.100.9' 00:27:25.205 23:26:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:25.205 192.168.100.9' 00:27:25.205 23:26:30 -- nvmf/common.sh@445 -- # head -n 1 00:27:25.205 23:26:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:25.205 23:26:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:25.205 192.168.100.9' 00:27:25.205 23:26:30 -- nvmf/common.sh@446 -- # tail -n +2 00:27:25.205 23:26:30 -- nvmf/common.sh@446 -- # head -n 1 00:27:25.205 23:26:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:25.205 23:26:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:25.205 23:26:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:25.205 23:26:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:25.205 23:26:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:25.205 23:26:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:25.205 23:26:30 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:25.205 23:26:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:25.205 23:26:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:25.205 23:26:30 -- common/autotest_common.sh@10 -- # set +x 00:27:25.205 23:26:30 -- nvmf/common.sh@469 -- # nvmfpid=754176 00:27:25.205 23:26:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:25.205 23:26:30 -- nvmf/common.sh@470 -- # waitforlisten 754176 00:27:25.205 23:26:30 -- common/autotest_common.sh@819 -- # '[' -z 754176 ']' 00:27:25.205 23:26:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.205 23:26:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:25.205 23:26:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.205 23:26:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:25.205 23:26:30 -- common/autotest_common.sh@10 -- # set +x 00:27:25.205 [2024-11-02 23:26:30.778726] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:25.205 [2024-11-02 23:26:30.778780] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.205 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.205 [2024-11-02 23:26:30.848322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.205 [2024-11-02 23:26:30.917621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:25.205 [2024-11-02 23:26:30.917754] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.205 [2024-11-02 23:26:30.917764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.205 [2024-11-02 23:26:30.917773] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.205 [2024-11-02 23:26:30.917962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.205 [2024-11-02 23:26:30.917892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.205 [2024-11-02 23:26:30.917960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.151 23:26:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:26.151 23:26:31 -- common/autotest_common.sh@852 -- # return 0 00:27:26.151 23:26:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:26.151 23:26:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:26.151 23:26:31 -- common/autotest_common.sh@10 -- # set +x 00:27:26.151 23:26:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.151 23:26:31 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:26.151 [2024-11-02 23:26:31.844426] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7c1860/0x7c5d50) succeed. 00:27:26.151 [2024-11-02 23:26:31.853601] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7c2db0/0x8073f0) succeed. 
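A minimal sketch of what the nvmf/common.sh trace above boils down to: each RDMA-capable netdev (mlx_0_0 and mlx_0_1 in this run) is queried for its IPv4 address, the results become NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9), and the rdma transport is created on the running target. Interface names and the rpc.py path are taken from this run and would differ on other hosts.

for nic in mlx_0_0 mlx_0_1; do
  # same pipeline as the trace: strip the prefix length from the addr output
  ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
done
# -> 192.168.100.8 and 192.168.100.9
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192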
00:27:26.410 23:26:31 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:26.410 Malloc0 00:27:26.670 23:26:32 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.670 23:26:32 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:26.929 23:26:32 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:27.188 [2024-11-02 23:26:32.711589] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:27.188 23:26:32 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:27.188 [2024-11-02 23:26:32.887925] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:27.188 23:26:32 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:27.447 [2024-11-02 23:26:33.064562] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:27.447 23:26:33 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:27.447 23:26:33 -- host/failover.sh@31 -- # bdevperf_pid=754502 00:27:27.447 23:26:33 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:27.447 23:26:33 -- host/failover.sh@34 -- # waitforlisten 754502 /var/tmp/bdevperf.sock 00:27:27.447 23:26:33 -- common/autotest_common.sh@819 -- # '[' -z 754502 ']' 00:27:27.447 23:26:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:27.447 23:26:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:27.447 23:26:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:27.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:27.447 23:26:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:27.447 23:26:33 -- common/autotest_common.sh@10 -- # set +x 00:27:28.384 23:26:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:28.384 23:26:33 -- common/autotest_common.sh@852 -- # return 0 00:27:28.384 23:26:33 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.643 NVMe0n1 00:27:28.643 23:26:34 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.900 00:27:28.900 23:26:34 -- host/failover.sh@39 -- # run_test_pid=754755 00:27:28.900 23:26:34 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:28.900 23:26:34 -- host/failover.sh@41 -- # sleep 1 00:27:29.836 23:26:35 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:30.094 23:26:35 -- host/failover.sh@45 -- # sleep 3 00:27:33.382 23:26:38 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:33.382 00:27:33.382 23:26:38 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:33.382 23:26:39 -- host/failover.sh@50 -- # sleep 3 00:27:36.670 23:26:42 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:36.670 [2024-11-02 23:26:42.282466] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:36.670 23:26:42 -- host/failover.sh@55 -- # sleep 1 00:27:37.607 23:26:43 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:37.866 23:26:43 -- host/failover.sh@59 -- # wait 754755 00:27:44.443 0 00:27:44.443 23:26:49 -- host/failover.sh@61 -- # killprocess 754502 00:27:44.443 23:26:49 -- common/autotest_common.sh@926 -- # '[' -z 754502 ']' 00:27:44.443 23:26:49 -- common/autotest_common.sh@930 -- # kill -0 754502 00:27:44.443 23:26:49 -- common/autotest_common.sh@931 -- # uname 00:27:44.443 23:26:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:44.443 23:26:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 754502 00:27:44.443 23:26:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:44.443 23:26:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:44.443 23:26:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 754502' 00:27:44.443 killing process with pid 754502 00:27:44.443 23:26:49 -- common/autotest_common.sh@945 -- # kill 754502 00:27:44.443 23:26:49 -- common/autotest_common.sh@950 -- # wait 754502 00:27:44.443 23:26:49 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:44.443 [2024-11-02 23:26:33.118096] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:44.443 [2024-11-02 23:26:33.118153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754502 ] 00:27:44.443 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.443 [2024-11-02 23:26:33.189647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.443 [2024-11-02 23:26:33.259089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.443 Running I/O for 15 seconds... 00:27:44.443 [2024-11-02 23:26:36.661591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182700 00:27:44.443 [2024-11-02 23:26:36.661638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182700 00:27:44.443 [2024-11-02 23:26:36.661667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x182700 00:27:44.443 [2024-11-02 23:26:36.661687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182700 00:27:44.443 [2024-11-02 23:26:36.661708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.443 [2024-11-02 23:26:36.661727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.443 [2024-11-02 23:26:36.661746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x181c00 00:27:44.443 [2024-11-02 23:26:36.661767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:90024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x182700 00:27:44.443 [2024-11-02 23:26:36.661786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181c00 00:27:44.443 [2024-11-02 23:26:36.661806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.443 [2024-11-02 23:26:36.661816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.661825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.661841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.661850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.661860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.661870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.661880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.661889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.661900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.661909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.661919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.661928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.661938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.661947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.661957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.661970] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.661981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.661991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.662070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.662128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.662147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 
23:26:36.662157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.662166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.662281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90160 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.662358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x181c00 00:27:44.444 [2024-11-02 23:26:36.662435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.662472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.444 [2024-11-02 23:26:36.662510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 
m:0 dnr:0 00:27:44.444 [2024-11-02 23:26:36.662522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x182700 00:27:44.444 [2024-11-02 23:26:36.662531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.662550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.662588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.662628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.662648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.662667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662697] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.662705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.662764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.662802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.662821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.662862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662882] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.662901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.662920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.662939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.662959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.662983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.662993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.663002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.663021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.663040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.663060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663070] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.663079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.663098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.663118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.663137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x182700 00:27:44.445 [2024-11-02 23:26:36.663157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.663176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.663195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.445 [2024-11-02 23:26:36.663216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x181c00 00:27:44.445 [2024-11-02 23:26:36.663235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.445 [2024-11-02 23:26:36.663245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 
key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 
00:27:44.446 [2024-11-02 23:26:36.663612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x182700 00:27:44.446 [2024-11-02 23:26:36.663755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 
len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.446 [2024-11-02 23:26:36.663910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.446 [2024-11-02 23:26:36.663939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181c00 00:27:44.446 [2024-11-02 23:26:36.663948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.663958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:36.663971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.663981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:36.663990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.664001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:36.664009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.664020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182700 00:27:44.447 [2024-11-02 23:26:36.664028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.664039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:36.664048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.664058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:36.664067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.664077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:36.664088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.664098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:36.664109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.666112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:44.447 [2024-11-02 23:26:36.666126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:44.447 [2024-11-02 23:26:36.666134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89968 len:8 PRP1 0x0 PRP2 0x0 00:27:44.447 [2024-11-02 23:26:36.666144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:36.666186] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
00:27:44.447 [2024-11-02 23:26:36.666202] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:44.447 [2024-11-02 23:26:36.666213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.447 [2024-11-02 23:26:36.667998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.447 [2024-11-02 23:26:36.682550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:44.447 [2024-11-02 23:26:36.716014] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:44.447 [2024-11-02 23:26:40.110492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:40.110538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 
key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:40.110788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:40.110827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 
00:27:44.447 [2024-11-02 23:26:40.110876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.110926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.110946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:40.110969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181c00 00:27:44.447 [2024-11-02 23:26:40.110989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.110999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182900 00:27:44.447 [2024-11-02 23:26:40.111008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.111019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.447 [2024-11-02 23:26:40.111028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.447 [2024-11-02 23:26:40.111039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013879b80 
len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x181c00 00:27:44.448 [2024-11-02 23:26:40.111086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013877a80 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181c00 00:27:44.448 [2024-11-02 23:26:40.111146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x181c00 00:27:44.448 [2024-11-02 23:26:40.111165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181c00 00:27:44.448 [2024-11-02 23:26:40.111377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57688 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x182900 00:27:44.448 [2024-11-02 23:26:40.111587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x181c00 00:27:44.448 [2024-11-02 23:26:40.111608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-02 23:26:40.111628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181c00 00:27:44.448 [2024-11-02 23:26:40.111647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.448 [2024-11-02 23:26:40.111657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.111704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.111724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.111762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.111781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57816 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000138ce480 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.111801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.111897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.111936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.111977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.111987] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.111996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.112056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.112095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.112116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57168 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.112231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.112250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.112289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.112309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x182900 00:27:44.449 [2024-11-02 23:26:40.112328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-02 23:26:40.112347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.449 [2024-11-02 23:26:40.112359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x181c00 00:27:44.449 [2024-11-02 23:26:40.112368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x181c00 00:27:44.450 [2024-11-02 23:26:40.112388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x181c00 00:27:44.450 [2024-11-02 23:26:40.112463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 
23:26:40.112532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181c00 00:27:44.450 [2024-11-02 23:26:40.112793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181c00 00:27:44.450 [2024-11-02 23:26:40.112889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 
00:27:44.450 [2024-11-02 23:26:40.112900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181c00 00:27:44.450 [2024-11-02 23:26:40.112908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x182900 00:27:44.450 [2024-11-02 23:26:40.112927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181c00 00:27:44.450 [2024-11-02 23:26:40.112946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181c00 00:27:44.450 [2024-11-02 23:26:40.112969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.112981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.450 [2024-11-02 23:26:40.112990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.113000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x181c00 00:27:44.450 [2024-11-02 23:26:40.113009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.114906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:44.450 [2024-11-02 23:26:40.114921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:44.450 [2024-11-02 23:26:40.114930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58152 len:8 PRP1 0x0 PRP2 0x0 00:27:44.450 [2024-11-02 23:26:40.114939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.450 [2024-11-02 23:26:40.114988] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:27:44.450 [2024-11-02 23:26:40.114999] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:27:44.450 [2024-11-02 23:26:40.115010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
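[editor's note] The failover notices step the target port from 4420 to 4421 and now from 4421 to 4422 on the same address (192.168.100.8), i.e. the controller has several transport IDs registered for nqn.2016-06.io.spdk:cnode1 and bdev_nvme rotates to the next one after the CQ transport error (-6, "No such device or address") on the current path. As an illustrative sketch only (not taken from the test scripts; the address, ports and NQN mirror the log, while the bdev name and the exact option spelling may differ between SPDK releases), such alternate paths are commonly registered like this:

# Illustrative sketch: attach one subsystem through several target ports so
# bdev_nvme can fail over between them.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Additional paths to the same NQN become failover targets:
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover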
00:27:44.450 [2024-11-02 23:26:40.116938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.450 [2024-11-02 23:26:40.131387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:44.450 [2024-11-02 23:26:40.161986] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:44.450 [2024-11-02 23:26:44.482243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:61 nsid:1 lba:91248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91320 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x181c00 00:27:44.451 [2024-11-02 23:26:44.482925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.482963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182700 00:27:44.451 [2024-11-02 23:26:44.482986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.482997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.483006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.483016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.483025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.451 [2024-11-02 23:26:44.483036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.451 [2024-11-02 23:26:44.483045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 
23:26:44.483378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013878b00 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483561] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.452 [2024-11-02 23:26:44.483608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181c00 00:27:44.452 [2024-11-02 23:26:44.483666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182700 00:27:44.452 [2024-11-02 23:26:44.483685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.452 [2024-11-02 23:26:44.483696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x181c00 00:27:44.453 [2024-11-02 23:26:44.483705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f5780 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.483724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x181c00 
00:27:44.453 [2024-11-02 23:26:44.483743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.483764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.483783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f1580 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.483802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.483821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.483840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.483859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.483879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181c00 00:27:44.453 [2024-11-02 23:26:44.483897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x181c00 00:27:44.453 [2024-11-02 23:26:44.483916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 
00:27:44.453 [2024-11-02 23:26:44.483927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.483936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.483955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f8900 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.483979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.483989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x181c00 00:27:44.453 [2024-11-02 23:26:44.484058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 
len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x181c00 00:27:44.453 [2024-11-02 23:26:44.484289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 
p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d5800 len:0x1000 key:0x182700 00:27:44.453 [2024-11-02 23:26:44.484327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x181c00 00:27:44.453 [2024-11-02 23:26:44.484384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.453 [2024-11-02 23:26:44.484425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.453 [2024-11-02 23:26:44.484435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181c00 00:27:44.454 [2024-11-02 23:26:44.484444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x182700 00:27:44.454 [2024-11-02 23:26:44.484464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92408 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200013883000 len:0x1000 key:0x182700 00:27:44.454 [2024-11-02 23:26:44.484484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.454 [2024-11-02 23:26:44.484503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x182700 00:27:44.454 [2024-11-02 23:26:44.484523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x182700 00:27:44.454 [2024-11-02 23:26:44.484543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182700 00:27:44.454 [2024-11-02 23:26:44.484561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x182700 00:27:44.454 [2024-11-02 23:26:44.484580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.454 [2024-11-02 23:26:44.484600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.454 [2024-11-02 23:26:44.484618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181c00 00:27:44.454 [2024-11-02 23:26:44.484637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181c00 00:27:44.454 [2024-11-02 23:26:44.484657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182700 00:27:44.454 [2024-11-02 23:26:44.484678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181c00 00:27:44.454 [2024-11-02 23:26:44.484697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181c00 00:27:44.454 [2024-11-02 23:26:44.484717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x182700 00:27:44.454 [2024-11-02 23:26:44.484737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.484747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181c00 00:27:44.454 [2024-11-02 23:26:44.484756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f3be000 sqhd:5310 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.486596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:44.454 [2024-11-02 23:26:44.486609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:44.454 [2024-11-02 23:26:44.486618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92488 len:8 PRP1 0x0 PRP2 0x0 00:27:44.454 [2024-11-02 23:26:44.486627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.454 [2024-11-02 23:26:44.486668] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:27:44.454 [2024-11-02 23:26:44.486680] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:27:44.454 [2024-11-02 23:26:44.486690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.454 [2024-11-02 23:26:44.488346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.454 [2024-11-02 23:26:44.502177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:44.454 [2024-11-02 23:26:44.534848] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
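The failover above ends with the controller being reset and I/O resuming on 192.168.100.8:4420. The shell trace later in this log (host/failover.sh@76 through @84) records how the test builds the multipath setup that makes this possible: extra listeners are added on the target, the same bdev name is attached to each port through the bdevperf RPC socket, and the active path is then detached to force a failover. A condensed sketch of that sequence, using only the addresses, ports and flags visible in the trace (the for loops and the rpc shorthand are illustrative, not the script's actual wording):

# Target side: expose additional ports so the initiator has alternate paths.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for port in 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
done
# Initiator side (bdevperf RPC socket): attach the same bdev name to each port.
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Drop the active path while I/O is running to force a failover to the next trid.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Registering several paths under the one NVMe0 name appears to be what drives the bdev_nvme_failover_trid messages above, where I/O moves from one port to another after the active queue pair is torn down.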
00:27:44.454
00:27:44.454 Latency(us)
00:27:44.454 [2024-11-02T22:26:50.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:44.454 [2024-11-02T22:26:50.211Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:44.454 Verification LBA range: start 0x0 length 0x4000
00:27:44.454 NVMe0n1 : 15.00 20115.52 78.58 294.65 0.00 6257.53 398.13 1020054.73
00:27:44.454 [2024-11-02T22:26:50.211Z] ===================================================================================================================
00:27:44.454 [2024-11-02T22:26:50.211Z] Total : 20115.52 78.58 294.65 0.00 6257.53 398.13 1020054.73
00:27:44.454 Received shutdown signal, test time was about 15.000000 seconds
00:27:44.454
00:27:44.454 Latency(us)
00:27:44.454 [2024-11-02T22:26:50.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:44.454 [2024-11-02T22:26:50.211Z] ===================================================================================================================
00:27:44.454 [2024-11-02T22:26:50.211Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:44.454 23:26:49 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:44.454 23:26:49 -- host/failover.sh@65 -- # count=3
00:27:44.454 23:26:49 -- host/failover.sh@67 -- # (( count != 3 ))
00:27:44.454 23:26:49 -- host/failover.sh@73 -- # bdevperf_pid=757445
00:27:44.454 23:26:49 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:44.454 23:26:49 -- host/failover.sh@75 -- # waitforlisten 757445 /var/tmp/bdevperf.sock
00:27:44.454 23:26:49 -- common/autotest_common.sh@819 -- # '[' -z 757445 ']'
00:27:44.454 23:26:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:44.454 23:26:49 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:44.454 23:26:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:44.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
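The count=3 / (( count != 3 )) lines in the trace above are the pass criterion for this test: the captured bdevperf output must contain one "Resetting controller successful" message per forced path removal. A minimal sketch of that check, assuming the output was saved to try.txt as shown later in this log (the helper function name is illustrative only):

verify_failover_count() {
    # Count how many successful controller resets bdevperf logged and
    # compare against the number of paths the test deliberately removed.
    local log_file=$1 expected=$2
    local count
    count=$(grep -c 'Resetting controller successful' "$log_file")
    if (( count != expected )); then
        echo "expected $expected successful resets, saw $count" >&2
        return 1
    fi
}
# Values matching this run:
verify_failover_count /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 3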
00:27:44.454 23:26:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:44.454 23:26:49 -- common/autotest_common.sh@10 -- # set +x 00:27:45.391 23:26:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:45.391 23:26:50 -- common/autotest_common.sh@852 -- # return 0 00:27:45.391 23:26:50 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:45.391 [2024-11-02 23:26:50.956351] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:45.391 23:26:50 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:45.650 [2024-11-02 23:26:51.148983] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:45.650 23:26:51 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:45.909 NVMe0n1 00:27:45.909 23:26:51 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:46.168 00:27:46.168 23:26:51 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:46.168 00:27:46.427 23:26:51 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:46.427 23:26:51 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:46.427 23:26:52 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:46.794 23:26:52 -- host/failover.sh@87 -- # sleep 3 00:27:50.106 23:26:55 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:50.106 23:26:55 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:50.106 23:26:55 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:50.106 23:26:55 -- host/failover.sh@90 -- # run_test_pid=758281 00:27:50.106 23:26:55 -- host/failover.sh@92 -- # wait 758281 00:27:51.045 0 00:27:51.045 23:26:56 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:51.045 [2024-11-02 23:26:49.972248] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:51.045 [2024-11-02 23:26:49.972304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757445 ] 00:27:51.045 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.045 [2024-11-02 23:26:50.044727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.045 [2024-11-02 23:26:50.123890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.045 [2024-11-02 23:26:52.270013] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:51.045 [2024-11-02 23:26:52.270651] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:51.045 [2024-11-02 23:26:52.270675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:51.045 [2024-11-02 23:26:52.286245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:51.045 [2024-11-02 23:26:52.302243] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:51.045 Running I/O for 1 seconds... 00:27:51.045 00:27:51.045 Latency(us) 00:27:51.045 [2024-11-02T22:26:56.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.045 [2024-11-02T22:26:56.802Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:51.045 Verification LBA range: start 0x0 length 0x4000 00:27:51.045 NVMe0n1 : 1.00 25273.81 98.73 0.00 0.00 5040.85 1218.97 9175.04 00:27:51.045 [2024-11-02T22:26:56.802Z] =================================================================================================================== 00:27:51.045 [2024-11-02T22:26:56.802Z] Total : 25273.81 98.73 0.00 0.00 5040.85 1218.97 9175.04 00:27:51.045 23:26:56 -- host/failover.sh@95 -- # grep -q NVMe0 00:27:51.045 23:26:56 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:51.305 23:26:56 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:51.305 23:26:57 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:51.305 23:26:57 -- host/failover.sh@99 -- # grep -q NVMe0 00:27:51.564 23:26:57 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:51.824 23:26:57 -- host/failover.sh@101 -- # sleep 3 00:27:55.118 23:27:00 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:55.118 23:27:00 -- host/failover.sh@103 -- # grep -q NVMe0 00:27:55.118 23:27:00 -- host/failover.sh@108 -- # killprocess 757445 00:27:55.118 23:27:00 -- common/autotest_common.sh@926 -- # '[' -z 757445 ']' 00:27:55.118 23:27:00 -- common/autotest_common.sh@930 -- # kill -0 757445 00:27:55.118 23:27:00 -- common/autotest_common.sh@931 -- # uname 00:27:55.118 23:27:00 -- common/autotest_common.sh@931 -- # 
'[' Linux = Linux ']' 00:27:55.118 23:27:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 757445 00:27:55.118 23:27:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:55.118 23:27:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:55.118 23:27:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 757445' 00:27:55.118 killing process with pid 757445 00:27:55.118 23:27:00 -- common/autotest_common.sh@945 -- # kill 757445 00:27:55.118 23:27:00 -- common/autotest_common.sh@950 -- # wait 757445 00:27:55.118 23:27:00 -- host/failover.sh@110 -- # sync 00:27:55.118 23:27:00 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.378 23:27:01 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:55.378 23:27:01 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:55.378 23:27:01 -- host/failover.sh@116 -- # nvmftestfini 00:27:55.378 23:27:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:55.378 23:27:01 -- nvmf/common.sh@116 -- # sync 00:27:55.378 23:27:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:55.378 23:27:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:55.378 23:27:01 -- nvmf/common.sh@119 -- # set +e 00:27:55.378 23:27:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:55.378 23:27:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:55.378 rmmod nvme_rdma 00:27:55.378 rmmod nvme_fabrics 00:27:55.378 23:27:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:55.378 23:27:01 -- nvmf/common.sh@123 -- # set -e 00:27:55.378 23:27:01 -- nvmf/common.sh@124 -- # return 0 00:27:55.378 23:27:01 -- nvmf/common.sh@477 -- # '[' -n 754176 ']' 00:27:55.378 23:27:01 -- nvmf/common.sh@478 -- # killprocess 754176 00:27:55.378 23:27:01 -- common/autotest_common.sh@926 -- # '[' -z 754176 ']' 00:27:55.378 23:27:01 -- common/autotest_common.sh@930 -- # kill -0 754176 00:27:55.378 23:27:01 -- common/autotest_common.sh@931 -- # uname 00:27:55.378 23:27:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:55.378 23:27:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 754176 00:27:55.638 23:27:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:55.638 23:27:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:55.638 23:27:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 754176' 00:27:55.638 killing process with pid 754176 00:27:55.638 23:27:01 -- common/autotest_common.sh@945 -- # kill 754176 00:27:55.638 23:27:01 -- common/autotest_common.sh@950 -- # wait 754176 00:27:55.898 23:27:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:55.898 23:27:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:55.898 00:27:55.898 real 0m37.517s 00:27:55.898 user 2m5.159s 00:27:55.898 sys 0m7.328s 00:27:55.898 23:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.898 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:27:55.898 ************************************ 00:27:55.898 END TEST nvmf_failover 00:27:55.898 ************************************ 00:27:55.898 23:27:01 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:27:55.898 23:27:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:55.898 23:27:01 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:27:55.898 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:27:55.898 ************************************ 00:27:55.898 START TEST nvmf_discovery 00:27:55.898 ************************************ 00:27:55.898 23:27:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:27:55.898 * Looking for test storage... 00:27:55.898 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:55.898 23:27:01 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.898 23:27:01 -- nvmf/common.sh@7 -- # uname -s 00:27:55.898 23:27:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.898 23:27:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.898 23:27:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.898 23:27:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.898 23:27:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.898 23:27:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.898 23:27:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.898 23:27:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.898 23:27:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.898 23:27:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.898 23:27:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:55.898 23:27:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:55.898 23:27:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.898 23:27:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.898 23:27:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.898 23:27:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:55.898 23:27:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.899 23:27:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.899 23:27:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.899 23:27:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.899 23:27:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.899 23:27:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.899 23:27:01 -- paths/export.sh@5 -- # export PATH 00:27:55.899 23:27:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.899 23:27:01 -- nvmf/common.sh@46 -- # : 0 00:27:55.899 23:27:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:55.899 23:27:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:55.899 23:27:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:55.899 23:27:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.899 23:27:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.899 23:27:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:55.899 23:27:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:55.899 23:27:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:55.899 23:27:01 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:27:55.899 23:27:01 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:27:55.899 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:27:55.899 23:27:01 -- host/discovery.sh@13 -- # exit 0 00:27:55.899 00:27:55.899 real 0m0.095s 00:27:55.899 user 0m0.026s 00:27:55.899 sys 0m0.073s 00:27:55.899 23:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.899 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:27:55.899 ************************************ 00:27:55.899 END TEST nvmf_discovery 00:27:55.899 ************************************ 00:27:55.899 23:27:01 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:27:55.899 23:27:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:55.899 23:27:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.899 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:27:56.159 ************************************ 00:27:56.159 START TEST nvmf_discovery_remove_ifc 00:27:56.159 ************************************ 00:27:56.159 23:27:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:27:56.159 * Looking for test storage... 
00:27:56.159 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:56.159 23:27:01 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.159 23:27:01 -- nvmf/common.sh@7 -- # uname -s 00:27:56.159 23:27:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.159 23:27:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.159 23:27:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.159 23:27:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.159 23:27:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.159 23:27:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.159 23:27:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.159 23:27:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.159 23:27:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.159 23:27:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.159 23:27:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:56.159 23:27:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:56.159 23:27:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.159 23:27:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.159 23:27:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.159 23:27:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:56.159 23:27:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.159 23:27:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.159 23:27:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.159 23:27:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.159 23:27:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.159 23:27:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.159 23:27:01 -- paths/export.sh@5 -- # export PATH 00:27:56.159 23:27:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.159 23:27:01 -- nvmf/common.sh@46 -- # : 0 00:27:56.159 23:27:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:56.159 23:27:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:56.159 23:27:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:56.159 23:27:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.159 23:27:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.159 23:27:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:56.159 23:27:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:56.159 23:27:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:56.159 23:27:01 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:27:56.159 23:27:01 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:27:56.159 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
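Both discovery host tests above bail out as soon as they see the RDMA transport; the xtrace shows the guard directly ('[' rdma == rdma ']' followed by the echo and exit 0). A minimal sketch of that guard, assuming the simplified form (the real scripts source nvmf/common.sh first and the transport is assumed to be parsed from --transport=rdma):

    # Sketch of the skip guard traced in host/discovery.sh / host/discovery_remove_ifc.sh
    # (simplified; TEST_TRANSPORT is an assumed variable name for the parsed --transport value).
    if [[ "$TEST_TRANSPORT" == "rdma" ]]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi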
00:27:56.159 23:27:01 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:27:56.159 00:27:56.159 real 0m0.124s 00:27:56.159 user 0m0.053s 00:27:56.159 sys 0m0.081s 00:27:56.159 23:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.159 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:27:56.159 ************************************ 00:27:56.159 END TEST nvmf_discovery_remove_ifc 00:27:56.159 ************************************ 00:27:56.159 23:27:01 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:27:56.159 23:27:01 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:27:56.159 23:27:01 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:27:56.159 23:27:01 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:27:56.159 23:27:01 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:27:56.159 23:27:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:56.159 23:27:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:56.159 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:27:56.159 ************************************ 00:27:56.159 START TEST nvmf_bdevperf 00:27:56.160 ************************************ 00:27:56.160 23:27:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:27:56.160 * Looking for test storage... 00:27:56.419 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:56.419 23:27:01 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.419 23:27:01 -- nvmf/common.sh@7 -- # uname -s 00:27:56.420 23:27:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.420 23:27:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.420 23:27:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.420 23:27:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.420 23:27:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.420 23:27:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.420 23:27:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.420 23:27:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.420 23:27:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.420 23:27:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.420 23:27:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:56.420 23:27:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:56.420 23:27:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.420 23:27:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.420 23:27:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.420 23:27:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:56.420 23:27:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.420 23:27:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.420 23:27:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.420 23:27:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.420 23:27:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.420 23:27:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.420 23:27:01 -- paths/export.sh@5 -- # export PATH 00:27:56.420 23:27:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.420 23:27:01 -- nvmf/common.sh@46 -- # : 0 00:27:56.420 23:27:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:56.420 23:27:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:56.420 23:27:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:56.420 23:27:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.420 23:27:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.420 23:27:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:56.420 23:27:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:56.420 23:27:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:56.420 23:27:01 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.420 23:27:01 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.420 23:27:01 -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:56.420 23:27:01 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:56.420 23:27:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.420 23:27:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:56.420 23:27:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:56.420 23:27:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:56.420 23:27:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:56.420 23:27:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.420 23:27:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.420 23:27:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:56.420 23:27:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:56.420 23:27:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:56.420 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:28:03.011 23:27:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:03.011 23:27:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:03.011 23:27:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:03.011 23:27:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:03.011 23:27:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:03.011 23:27:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:03.011 23:27:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:03.011 23:27:08 -- nvmf/common.sh@294 -- # net_devs=() 00:28:03.011 23:27:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:03.011 23:27:08 -- nvmf/common.sh@295 -- # e810=() 00:28:03.011 23:27:08 -- nvmf/common.sh@295 -- # local -ga e810 00:28:03.011 23:27:08 -- nvmf/common.sh@296 -- # x722=() 00:28:03.011 23:27:08 -- nvmf/common.sh@296 -- # local -ga x722 00:28:03.011 23:27:08 -- nvmf/common.sh@297 -- # mlx=() 00:28:03.011 23:27:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:03.011 23:27:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.011 23:27:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:03.011 23:27:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:03.011 23:27:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:03.011 23:27:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:03.011 23:27:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:03.011 23:27:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:03.011 23:27:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:03.011 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:03.011 23:27:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
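The PCI scan above found two matching Mellanox devices (vendor 0x15b3, device 0x1015, i.e. ConnectX-4 Lx); the first is 0000:d9:00.0 bound to mlx5_core, and its twin port follows just below. Roughly the same check can be made by hand with lspci (a sketch; assumes pciutils is installed on the node):

    # List Mellanox NICs by vendor ID 0x15b3 with numeric vendor:device codes.
    lspci -nn -d 15b3:
    # Confirm which kernel driver each port is bound to.
    lspci -k -s d9:00.0
    lspci -k -s d9:00.1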
00:28:03.011 23:27:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.011 23:27:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:03.011 23:27:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:03.011 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:03.011 23:27:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.011 23:27:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:03.011 23:27:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:03.011 23:27:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.011 23:27:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:03.011 23:27:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.011 23:27:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:03.011 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:03.011 23:27:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.011 23:27:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:03.011 23:27:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.011 23:27:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:03.011 23:27:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.011 23:27:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:03.011 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:03.011 23:27:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.011 23:27:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:03.011 23:27:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:03.011 23:27:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:03.011 23:27:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:03.011 23:27:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:03.011 23:27:08 -- nvmf/common.sh@57 -- # uname 00:28:03.011 23:27:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:03.011 23:27:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:03.011 23:27:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:03.011 23:27:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:03.011 23:27:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:03.011 23:27:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:03.011 23:27:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:03.011 23:27:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:03.011 23:27:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:03.012 23:27:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:03.012 23:27:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:03.012 23:27:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.012 23:27:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:03.012 23:27:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:03.012 23:27:08 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.012 23:27:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:03.012 23:27:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:03.012 23:27:08 -- nvmf/common.sh@104 -- # continue 2 00:28:03.012 23:27:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:03.012 23:27:08 -- nvmf/common.sh@104 -- # continue 2 00:28:03.012 23:27:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:03.012 23:27:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:03.012 23:27:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.012 23:27:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:03.012 23:27:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:03.012 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.012 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:03.012 altname enp217s0f0np0 00:28:03.012 altname ens818f0np0 00:28:03.012 inet 192.168.100.8/24 scope global mlx_0_0 00:28:03.012 valid_lft forever preferred_lft forever 00:28:03.012 23:27:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:03.012 23:27:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:03.012 23:27:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.012 23:27:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:03.012 23:27:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:03.012 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.012 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:03.012 altname enp217s0f1np1 00:28:03.012 altname ens818f1np1 00:28:03.012 inet 192.168.100.9/24 scope global mlx_0_1 00:28:03.012 valid_lft forever preferred_lft forever 00:28:03.012 23:27:08 -- nvmf/common.sh@410 -- # return 0 00:28:03.012 23:27:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:03.012 23:27:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:03.012 23:27:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:03.012 23:27:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:03.012 23:27:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.012 23:27:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:03.012 23:27:08 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:03.012 23:27:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.012 23:27:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:03.012 23:27:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:03.012 23:27:08 -- nvmf/common.sh@104 -- # continue 2 00:28:03.012 23:27:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.012 23:27:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.012 23:27:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:03.012 23:27:08 -- nvmf/common.sh@104 -- # continue 2 00:28:03.012 23:27:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:03.012 23:27:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:03.012 23:27:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.012 23:27:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:03.012 23:27:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:03.012 23:27:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.012 23:27:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.012 23:27:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:03.012 192.168.100.9' 00:28:03.012 23:27:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:03.012 192.168.100.9' 00:28:03.012 23:27:08 -- nvmf/common.sh@445 -- # head -n 1 00:28:03.012 23:27:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:03.012 23:27:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:03.012 192.168.100.9' 00:28:03.012 23:27:08 -- nvmf/common.sh@446 -- # tail -n +2 00:28:03.012 23:27:08 -- nvmf/common.sh@446 -- # head -n 1 00:28:03.012 23:27:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:03.012 23:27:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:03.012 23:27:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:03.012 23:27:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:03.012 23:27:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:03.012 23:27:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:03.012 23:27:08 -- host/bdevperf.sh@25 -- # tgt_init 00:28:03.012 23:27:08 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:03.012 23:27:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:03.012 23:27:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:03.012 23:27:08 -- common/autotest_common.sh@10 -- # set +x 00:28:03.012 23:27:08 -- nvmf/common.sh@469 -- # nvmfpid=763264 00:28:03.012 23:27:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF -m 0xE 00:28:03.012 23:27:08 -- nvmf/common.sh@470 -- # waitforlisten 763264 00:28:03.012 23:27:08 -- common/autotest_common.sh@819 -- # '[' -z 763264 ']' 00:28:03.012 23:27:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.012 23:27:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:03.012 23:27:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.012 23:27:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:03.012 23:27:08 -- common/autotest_common.sh@10 -- # set +x 00:28:03.012 [2024-11-02 23:27:08.681766] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:03.012 [2024-11-02 23:27:08.681816] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.012 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.012 [2024-11-02 23:27:08.752789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:03.272 [2024-11-02 23:27:08.827328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:03.272 [2024-11-02 23:27:08.827431] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.272 [2024-11-02 23:27:08.827440] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.272 [2024-11-02 23:27:08.827449] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.272 [2024-11-02 23:27:08.827550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.272 [2024-11-02 23:27:08.827634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.272 [2024-11-02 23:27:08.827636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.841 23:27:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:03.841 23:27:09 -- common/autotest_common.sh@852 -- # return 0 00:28:03.841 23:27:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:03.841 23:27:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:03.841 23:27:09 -- common/autotest_common.sh@10 -- # set +x 00:28:03.841 23:27:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.841 23:27:09 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:03.841 23:27:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.841 23:27:09 -- common/autotest_common.sh@10 -- # set +x 00:28:03.841 [2024-11-02 23:27:09.576684] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23fc860/0x2400d50) succeed. 00:28:03.841 [2024-11-02 23:27:09.585840] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23fddb0/0x24423f0) succeed. 
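At this point nvmfappstart has launched build/bin/nvmf_tgt (pid 763264, core mask 0xE, reactors on cores 1-3) and the first RPC, nvmf_create_transport, has opened both mlx5 ports as IB devices. Reproduced by hand from an SPDK checkout, the bring-up so far is roughly (a sketch; assumes the default /var/tmp/spdk.sock RPC socket and a built tree):

    # Start the target in the background and create the RDMA transport,
    # mirroring the nvmfappstart + rpc_cmd nvmf_create_transport calls in the trace above.
    ./build/bin/nvmf_tgt -m 0xE &
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192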
00:28:04.103 23:27:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.103 23:27:09 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:04.103 23:27:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.103 23:27:09 -- common/autotest_common.sh@10 -- # set +x 00:28:04.103 Malloc0 00:28:04.103 23:27:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.103 23:27:09 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.103 23:27:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.103 23:27:09 -- common/autotest_common.sh@10 -- # set +x 00:28:04.103 23:27:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.103 23:27:09 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.103 23:27:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.103 23:27:09 -- common/autotest_common.sh@10 -- # set +x 00:28:04.103 23:27:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.103 23:27:09 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:04.103 23:27:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.103 23:27:09 -- common/autotest_common.sh@10 -- # set +x 00:28:04.103 [2024-11-02 23:27:09.736443] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:04.103 23:27:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.103 23:27:09 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:04.103 23:27:09 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:04.103 23:27:09 -- nvmf/common.sh@520 -- # config=() 00:28:04.103 23:27:09 -- nvmf/common.sh@520 -- # local subsystem config 00:28:04.103 23:27:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:04.103 23:27:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:04.103 { 00:28:04.103 "params": { 00:28:04.103 "name": "Nvme$subsystem", 00:28:04.103 "trtype": "$TEST_TRANSPORT", 00:28:04.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.103 "adrfam": "ipv4", 00:28:04.103 "trsvcid": "$NVMF_PORT", 00:28:04.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.104 "hdgst": ${hdgst:-false}, 00:28:04.104 "ddgst": ${ddgst:-false} 00:28:04.104 }, 00:28:04.104 "method": "bdev_nvme_attach_controller" 00:28:04.104 } 00:28:04.104 EOF 00:28:04.104 )") 00:28:04.104 23:27:09 -- nvmf/common.sh@542 -- # cat 00:28:04.104 23:27:09 -- nvmf/common.sh@544 -- # jq . 00:28:04.104 23:27:09 -- nvmf/common.sh@545 -- # IFS=, 00:28:04.104 23:27:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:04.104 "params": { 00:28:04.104 "name": "Nvme1", 00:28:04.104 "trtype": "rdma", 00:28:04.104 "traddr": "192.168.100.8", 00:28:04.104 "adrfam": "ipv4", 00:28:04.104 "trsvcid": "4420", 00:28:04.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:04.104 "hdgst": false, 00:28:04.104 "ddgst": false 00:28:04.104 }, 00:28:04.104 "method": "bdev_nvme_attach_controller" 00:28:04.104 }' 00:28:04.104 [2024-11-02 23:27:09.786255] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
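With Malloc0 exported as a namespace of nqn.2016-06.io.spdk:cnode1 and an RDMA listener up on 192.168.100.8:4420, bdevperf is launched against it; the generated JSON above boils down to a single bdev_nvme_attach_controller call. A kernel-initiator equivalent of that attach, using the 'nvme connect -i 15' form this trace configured earlier, would look roughly like this (a sketch; the hostnqn/hostid values are the ones printed by nvme gen-hostnqn earlier in the log):

    # Attach to the same subsystem from the host side with nvme-cli instead of bdevperf.
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e
    # The Malloc0-backed namespace should then show up as a new /dev/nvmeXnY.
    nvme list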
00:28:04.104 [2024-11-02 23:27:09.786301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763458 ] 00:28:04.104 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.104 [2024-11-02 23:27:09.856046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.372 [2024-11-02 23:27:09.924234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.372 Running I/O for 1 seconds... 00:28:05.751 00:28:05.751 Latency(us) 00:28:05.751 [2024-11-02T22:27:11.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.751 [2024-11-02T22:27:11.508Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:05.751 Verification LBA range: start 0x0 length 0x4000 00:28:05.751 Nvme1n1 : 1.00 25782.23 100.71 0.00 0.00 4940.46 1291.06 11953.77 00:28:05.751 [2024-11-02T22:27:11.508Z] =================================================================================================================== 00:28:05.751 [2024-11-02T22:27:11.508Z] Total : 25782.23 100.71 0.00 0.00 4940.46 1291.06 11953.77 00:28:05.751 23:27:11 -- host/bdevperf.sh@30 -- # bdevperfpid=763734 00:28:05.751 23:27:11 -- host/bdevperf.sh@32 -- # sleep 3 00:28:05.751 23:27:11 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:05.751 23:27:11 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:05.751 23:27:11 -- nvmf/common.sh@520 -- # config=() 00:28:05.751 23:27:11 -- nvmf/common.sh@520 -- # local subsystem config 00:28:05.751 23:27:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:05.751 23:27:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:05.751 { 00:28:05.751 "params": { 00:28:05.751 "name": "Nvme$subsystem", 00:28:05.751 "trtype": "$TEST_TRANSPORT", 00:28:05.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.751 "adrfam": "ipv4", 00:28:05.751 "trsvcid": "$NVMF_PORT", 00:28:05.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.751 "hdgst": ${hdgst:-false}, 00:28:05.751 "ddgst": ${ddgst:-false} 00:28:05.751 }, 00:28:05.751 "method": "bdev_nvme_attach_controller" 00:28:05.751 } 00:28:05.751 EOF 00:28:05.751 )") 00:28:05.751 23:27:11 -- nvmf/common.sh@542 -- # cat 00:28:05.751 23:27:11 -- nvmf/common.sh@544 -- # jq . 00:28:05.751 23:27:11 -- nvmf/common.sh@545 -- # IFS=, 00:28:05.751 23:27:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:05.751 "params": { 00:28:05.751 "name": "Nvme1", 00:28:05.751 "trtype": "rdma", 00:28:05.751 "traddr": "192.168.100.8", 00:28:05.751 "adrfam": "ipv4", 00:28:05.751 "trsvcid": "4420", 00:28:05.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.751 "hdgst": false, 00:28:05.751 "ddgst": false 00:28:05.751 }, 00:28:05.751 "method": "bdev_nvme_attach_controller" 00:28:05.751 }' 00:28:05.751 [2024-11-02 23:27:11.368913] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
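The first bdevperf pass above is a 1-second sanity run (~25.8k IOPS, ~100 MiB/s of 4 KiB verify I/O at queue depth 128); the 15-second run now starting is the failure-injection pass. It adds -f so bdevperf keeps processing I/O after failures, and after a 3-second delay the script hard-kills the nvmf_tgt it is connected to. The shape of that step, reconstructed from the host/bdevperf.sh line numbers visible in this xtrace (a sketch, not the verbatim script; variable names are illustrative):

    # Reconstruction of the failover step around host/bdevperf.sh lines 29-35.
    "$rootdir"/build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"   # hard-kill the target mid-I/O (pid 763264 in this run)
    sleep 3              # let in-flight I/O drain as errors before recovery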
00:28:05.751 [2024-11-02 23:27:11.368975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763734 ] 00:28:05.751 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.751 [2024-11-02 23:27:11.439998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.751 [2024-11-02 23:27:11.503254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.010 Running I/O for 15 seconds... 00:28:09.299 23:27:14 -- host/bdevperf.sh@33 -- # kill -9 763264 00:28:09.299 23:27:14 -- host/bdevperf.sh@35 -- # sleep 3 00:28:09.868 [2024-11-02 23:27:15.355822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x181c00 00:28:09.868 [2024-11-02 23:27:15.355859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.355878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x181c00 00:28:09.868 [2024-11-02 23:27:15.355890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.355902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x182900 00:28:09.868 [2024-11-02 23:27:15.355911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.355922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181c00 00:28:09.868 [2024-11-02 23:27:15.355936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.355946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181c00 00:28:09.868 [2024-11-02 23:27:15.355955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.355971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.868 [2024-11-02 23:27:15.355981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.355992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181c00 00:28:09.868 [2024-11-02 23:27:15.356001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.356012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21888 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x182900 00:28:09.868 [2024-11-02 23:27:15.356020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.356031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181c00 00:28:09.868 [2024-11-02 23:27:15.356040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.356051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x181c00 00:28:09.868 [2024-11-02 23:27:15.356060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.868 [2024-11-02 23:27:15.356071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181c00 00:28:09.869 [2024-11-02 23:27:15.356283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181c00 00:28:09.869 [2024-11-02 23:27:15.356322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:22008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x181c00 00:28:09.869 [2024-11-02 23:27:15.356441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x181c00 00:28:09.869 [2024-11-02 23:27:15.356460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181c00 00:28:09.869 [2024-11-02 23:27:15.356537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x181c00 00:28:09.869 [2024-11-02 23:27:15.356595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181c00 00:28:09.869 [2024-11-02 23:27:15.356712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x181c00 00:28:09.869 [2024-11-02 23:27:15.356751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:93 nsid:1 lba:22112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182900 00:28:09.869 [2024-11-02 23:27:15.356770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.869 [2024-11-02 23:27:15.356789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.869 [2024-11-02 23:27:15.356800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.356809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.356820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.356829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.356840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.356849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.356860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.356870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.356880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.356890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.356900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.356909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.356919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.356928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.356938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.356948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.356958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 
00:28:09.870 [2024-11-02 23:27:15.357290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:22288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181c00 00:28:09.870 [2024-11-02 23:27:15.357612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.870 [2024-11-02 23:27:15.357651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x182900 00:28:09.870 [2024-11-02 23:27:15.357671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.870 [2024-11-02 23:27:15.357682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.357691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.871 [2024-11-02 23:27:15.357711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.357730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.871 [2024-11-02 23:27:15.357750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.357769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.357789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.357809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.357828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.357848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.357867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.357887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.357906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.357926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.871 [2024-11-02 23:27:15.357949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.357974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.357985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.357995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.871 [2024-11-02 23:27:15.358014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22440 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200013891700 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.358034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.871 [2024-11-02 23:27:15.358054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.358093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.358151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.358192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.358211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.358231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.871 [2024-11-02 23:27:15.358273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x182900 00:28:09.871 [2024-11-02 23:27:15.358332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 
sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181c00 00:28:09.871 [2024-11-02 23:27:15.358412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.871 [2024-11-02 23:27:15.358423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.871 [2024-11-02 23:27:15.358431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.872 [2024-11-02 23:27:15.358442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.872 [2024-11-02 23:27:15.358451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.872 [2024-11-02 23:27:15.358461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x182900 00:28:09.872 [2024-11-02 23:27:15.358470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.872 [2024-11-02 23:27:15.358481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.872 [2024-11-02 23:27:15.358490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.872 [2024-11-02 23:27:15.358501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181c00 00:28:09.872 [2024-11-02 23:27:15.358509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.872 [2024-11-02 23:27:15.358520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x181c00 00:28:09.872 [2024-11-02 23:27:15.358529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:de5b7000 sqhd:5310 p:0 m:0 dnr:0 00:28:09.872 [2024-11-02 23:27:15.360416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:09.872 [2024-11-02 23:27:15.360429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:09.872 [2024-11-02 23:27:15.360438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22544 len:8 PRP1 0x0 PRP2 0x0 00:28:09.872 [2024-11-02 23:27:15.360447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.872 [2024-11-02 23:27:15.360491] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
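The flood of *NOTICE* lines above is the expected fallout of the queue teardown: every I/O still outstanding on qpair 1 is completed with ABORTED - SQ DELETION (the "(00/08)" pair is status code type 0x0, generic, status code 0x08) once the submission queue is deleted for the controller reset. When triaging a run like this it is usually easier to collapse the flood into counts than to read it entry by entry; a minimal sketch, assuming the console output has been saved locally to a file named console.log (hypothetical name):

  # Total number of commands completed as ABORTED - SQ DELETION
  grep -c 'ABORTED - SQ DELETION' console.log
  # Split the aborted commands into READ vs WRITE
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' console.log | awk '{print $NF}' | sort | uniq -c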
00:28:09.872 [2024-11-02 23:27:15.362191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.872 [2024-11-02 23:27:15.376513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:09.872 [2024-11-02 23:27:15.379670] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:09.872 [2024-11-02 23:27:15.379690] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:09.872 [2024-11-02 23:27:15.379699] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:10.808 [2024-11-02 23:27:16.383734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:10.808 [2024-11-02 23:27:16.383759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.808 [2024-11-02 23:27:16.383860] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.808 [2024-11-02 23:27:16.383871] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.808 [2024-11-02 23:27:16.383881] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:10.808 [2024-11-02 23:27:16.383895] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:10.808 [2024-11-02 23:27:16.385647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
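The reject/reconnect cycle above is the host half of the failover scenario: bdevperf keeps trying to re-establish its RDMA queue pair, and each connect attempt is answered with RDMA_CM_EVENT_REJECTED because the first nvmf_tgt instance is being killed and replaced (the 'Killed' message appears just below). If you ever need to confirm by hand that a rebuilt target is reachable before the host gives up, a discovery probe from the initiator is usually enough; a minimal sketch, assuming nvme-cli is installed and using the listener address and port this test configures further down:

  # Probe the NVMe-oF discovery service over RDMA on the test's listener
  nvme discover -t rdma -a 192.168.100.8 -s 4420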
00:28:10.808 [2024-11-02 23:27:16.395716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.808 [2024-11-02 23:27:16.397878] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:10.808 [2024-11-02 23:27:16.397897] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:10.808 [2024-11-02 23:27:16.397905] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:11.745 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 763264 Killed "${NVMF_APP[@]}" "$@" 00:28:11.745 23:27:17 -- host/bdevperf.sh@36 -- # tgt_init 00:28:11.745 23:27:17 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:11.745 23:27:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:11.745 23:27:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:11.745 23:27:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.745 23:27:17 -- nvmf/common.sh@469 -- # nvmfpid=764817 00:28:11.745 23:27:17 -- nvmf/common.sh@470 -- # waitforlisten 764817 00:28:11.745 23:27:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:11.745 23:27:17 -- common/autotest_common.sh@819 -- # '[' -z 764817 ']' 00:28:11.745 23:27:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.745 23:27:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:11.745 23:27:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.746 23:27:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:11.746 23:27:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.746 [2024-11-02 23:27:17.393946] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:11.746 [2024-11-02 23:27:17.394015] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.746 [2024-11-02 23:27:17.401806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:11.746 [2024-11-02 23:27:17.401832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:11.746 [2024-11-02 23:27:17.401934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:11.746 [2024-11-02 23:27:17.401945] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:11.746 [2024-11-02 23:27:17.401956] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:11.746 [2024-11-02 23:27:17.403118] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.746 [2024-11-02 23:27:17.403701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
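The 'line 35: 763264 Killed' message is the test killing its first nvmf_tgt on purpose; tgt_init/nvmfappstart then launch a fresh target (pid 764817) and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming the SPDK tree this job checked out and the default RPC socket; the retry count and sleep interval here are illustrative rather than the exact values common.sh uses:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  tgt_pid=$!
  # Poll the RPC socket until the target responds (or give up after ~50 s)
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
      sleep 0.5
  done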
00:28:11.746 [2024-11-02 23:27:17.414982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:11.746 [2024-11-02 23:27:17.417122] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:11.746 [2024-11-02 23:27:17.417146] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:11.746 [2024-11-02 23:27:17.417155] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:11.746 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.746 [2024-11-02 23:27:17.465932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:12.005 [2024-11-02 23:27:17.539608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:12.005 [2024-11-02 23:27:17.539714] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.005 [2024-11-02 23:27:17.539725] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.005 [2024-11-02 23:27:17.539733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.005 [2024-11-02 23:27:17.539777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.005 [2024-11-02 23:27:17.539874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.005 [2024-11-02 23:27:17.539876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.575 23:27:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:12.575 23:27:18 -- common/autotest_common.sh@852 -- # return 0 00:28:12.575 23:27:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:12.575 23:27:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:12.575 23:27:18 -- common/autotest_common.sh@10 -- # set +x 00:28:12.575 23:27:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.575 23:27:18 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:12.575 23:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.575 23:27:18 -- common/autotest_common.sh@10 -- # set +x 00:28:12.575 [2024-11-02 23:27:18.298524] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18b4860/0x18b8d50) succeed. 00:28:12.575 [2024-11-02 23:27:18.307825] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18b5db0/0x18fa3f0) succeed. 00:28:12.836 23:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.836 23:27:18 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:12.836 23:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.836 23:27:18 -- common/autotest_common.sh@10 -- # set +x 00:28:12.836 [2024-11-02 23:27:18.421125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.836 [2024-11-02 23:27:18.421163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
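The 'EAL: No free 2048 kB hugepages reported on node 1' line is informational in this run; the target still starts, so DPDK evidently found its hugepages on the other NUMA node. If a target launch ever fails around this point instead, checking the per-node pools is the quickest first step; a minimal sketch using the kernel's standard sysfs layout (nothing SPDK-specific):

  # Free/total 2 MiB hugepages per NUMA node
  for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
      echo "$n: $(cat "$n/free_hugepages")/$(cat "$n/nr_hugepages")"
  done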
00:28:12.836 [2024-11-02 23:27:18.421280] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:12.836 [2024-11-02 23:27:18.421291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:12.836 [2024-11-02 23:27:18.421303] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:12.836 [2024-11-02 23:27:18.422363] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.836 [2024-11-02 23:27:18.422918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.836 Malloc0 00:28:12.836 23:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.836 23:27:18 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.836 23:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.836 23:27:18 -- common/autotest_common.sh@10 -- # set +x 00:28:12.836 [2024-11-02 23:27:18.434308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:12.836 [2024-11-02 23:27:18.436493] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:12.836 [2024-11-02 23:27:18.436514] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:12.836 [2024-11-02 23:27:18.436527] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:12.836 23:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.836 23:27:18 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:12.836 23:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.836 23:27:18 -- common/autotest_common.sh@10 -- # set +x 00:28:12.836 23:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.836 23:27:18 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:12.836 23:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.836 23:27:18 -- common/autotest_common.sh@10 -- # set +x 00:28:12.836 [2024-11-02 23:27:18.451016] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:12.836 23:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.836 23:27:18 -- host/bdevperf.sh@38 -- # wait 763734 00:28:13.774 [2024-11-02 23:27:19.440452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:13.774 [2024-11-02 23:27:19.440474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
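In between the host-side reset noise, bdevperf.sh has now rebuilt the whole target configuration over RPC: the rdma transport, the 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with its namespace, and the RDMA listener on 192.168.100.8:4420. Collected in one place, the equivalent standalone rpc.py invocations would look roughly like this; this is a sketch assuming the default /var/tmp/spdk.sock socket, while the test itself issues the same methods through its rpc_cmd wrapper:

  RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420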
00:28:13.774 [2024-11-02 23:27:19.440587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:13.774 [2024-11-02 23:27:19.440598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:13.774 [2024-11-02 23:27:19.440609] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:13.774 [2024-11-02 23:27:19.442181] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:13.774 [2024-11-02 23:27:19.442278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.774 [2024-11-02 23:27:19.453925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:13.774 [2024-11-02 23:27:19.484675] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:21.912 00:28:21.912 Latency(us) 00:28:21.912 [2024-11-02T22:27:27.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.912 [2024-11-02T22:27:27.669Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:21.912 Verification LBA range: start 0x0 length 0x4000 00:28:21.912 Nvme1n1 : 15.00 16870.36 65.90 21930.36 0.00 3288.00 367.00 1033476.51 00:28:21.912 [2024-11-02T22:27:27.669Z] =================================================================================================================== 00:28:21.912 [2024-11-02T22:27:27.669Z] Total : 16870.36 65.90 21930.36 0.00 3288.00 367.00 1033476.51 00:28:21.912 23:27:26 -- host/bdevperf.sh@39 -- # sync 00:28:21.912 23:27:26 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.912 23:27:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.912 23:27:26 -- common/autotest_common.sh@10 -- # set +x 00:28:21.912 23:27:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.912 23:27:26 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:21.912 23:27:26 -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:21.912 23:27:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:21.912 23:27:26 -- nvmf/common.sh@116 -- # sync 00:28:21.912 23:27:26 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:21.912 23:27:26 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:21.912 23:27:26 -- nvmf/common.sh@119 -- # set +e 00:28:21.912 23:27:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:21.912 23:27:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:21.912 rmmod nvme_rdma 00:28:21.912 rmmod nvme_fabrics 00:28:21.912 23:27:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:21.912 23:27:27 -- nvmf/common.sh@123 -- # set -e 00:28:21.912 23:27:27 -- nvmf/common.sh@124 -- # return 0 00:28:21.912 23:27:27 -- nvmf/common.sh@477 -- # '[' -n 764817 ']' 00:28:21.912 23:27:27 -- nvmf/common.sh@478 -- # killprocess 764817 00:28:21.912 23:27:27 -- common/autotest_common.sh@926 -- # '[' -z 764817 ']' 00:28:21.912 23:27:27 -- common/autotest_common.sh@930 -- # kill -0 764817 00:28:21.912 23:27:27 -- common/autotest_common.sh@931 -- # uname 00:28:21.912 23:27:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:21.912 23:27:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 764817 00:28:21.912 23:27:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:21.912 23:27:27 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:21.912 23:27:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 764817' 00:28:21.912 killing process with pid 764817 00:28:21.912 23:27:27 -- common/autotest_common.sh@945 -- # kill 764817 00:28:21.912 23:27:27 -- common/autotest_common.sh@950 -- # wait 764817 00:28:21.912 23:27:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:21.912 23:27:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:21.912 00:28:21.912 real 0m25.535s 00:28:21.912 user 1m4.815s 00:28:21.912 sys 0m6.241s 00:28:21.912 23:27:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.912 23:27:27 -- common/autotest_common.sh@10 -- # set +x 00:28:21.912 ************************************ 00:28:21.912 END TEST nvmf_bdevperf 00:28:21.912 ************************************ 00:28:21.912 23:27:27 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:21.912 23:27:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:21.912 23:27:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:21.912 23:27:27 -- common/autotest_common.sh@10 -- # set +x 00:28:21.912 ************************************ 00:28:21.912 START TEST nvmf_target_disconnect 00:28:21.912 ************************************ 00:28:21.912 23:27:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:21.912 * Looking for test storage... 00:28:21.912 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:21.912 23:27:27 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.912 23:27:27 -- nvmf/common.sh@7 -- # uname -s 00:28:21.912 23:27:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.912 23:27:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.912 23:27:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.912 23:27:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.912 23:27:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.912 23:27:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.912 23:27:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.912 23:27:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.912 23:27:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.912 23:27:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.912 23:27:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:21.912 23:27:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:21.912 23:27:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.912 23:27:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.912 23:27:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.912 23:27:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:21.912 23:27:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.912 23:27:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.912 23:27:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.912 23:27:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.912 23:27:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.912 23:27:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.912 23:27:27 -- paths/export.sh@5 -- # export PATH 00:28:21.912 23:27:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.912 23:27:27 -- nvmf/common.sh@46 -- # : 0 00:28:21.912 23:27:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:21.912 23:27:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:21.912 23:27:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:21.912 23:27:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.912 23:27:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.912 23:27:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:21.912 23:27:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:21.912 23:27:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:21.912 23:27:27 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:21.912 23:27:27 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:21.912 23:27:27 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:21.913 23:27:27 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:21.913 23:27:27 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:21.913 23:27:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.913 23:27:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:21.913 23:27:27 -- nvmf/common.sh@398 -- # 
local -g is_hw=no 00:28:21.913 23:27:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:21.913 23:27:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.913 23:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.913 23:27:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.913 23:27:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:21.913 23:27:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:21.913 23:27:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:21.913 23:27:27 -- common/autotest_common.sh@10 -- # set +x 00:28:28.487 23:27:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:28.487 23:27:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:28.487 23:27:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:28.487 23:27:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:28.487 23:27:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:28.487 23:27:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:28.487 23:27:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:28.487 23:27:33 -- nvmf/common.sh@294 -- # net_devs=() 00:28:28.487 23:27:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:28.487 23:27:33 -- nvmf/common.sh@295 -- # e810=() 00:28:28.487 23:27:33 -- nvmf/common.sh@295 -- # local -ga e810 00:28:28.487 23:27:33 -- nvmf/common.sh@296 -- # x722=() 00:28:28.487 23:27:33 -- nvmf/common.sh@296 -- # local -ga x722 00:28:28.487 23:27:33 -- nvmf/common.sh@297 -- # mlx=() 00:28:28.487 23:27:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:28.487 23:27:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.487 23:27:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.487 23:27:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.487 23:27:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.487 23:27:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.487 23:27:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.487 23:27:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.487 23:27:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.488 23:27:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.488 23:27:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.488 23:27:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.488 23:27:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:28.488 23:27:33 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:28.488 23:27:33 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:28.488 23:27:33 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:28.488 23:27:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:28.488 23:27:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:28.488 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:28.488 23:27:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:28:28.488 23:27:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:28.488 23:27:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:28.488 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:28.488 23:27:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:28.488 23:27:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:28.488 23:27:33 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.488 23:27:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.488 23:27:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.488 23:27:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:28.488 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:28.488 23:27:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.488 23:27:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.488 23:27:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.488 23:27:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.488 23:27:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:28.488 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:28.488 23:27:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.488 23:27:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:28.488 23:27:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:28.488 23:27:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:28.488 23:27:33 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:28.488 23:27:33 -- nvmf/common.sh@57 -- # uname 00:28:28.488 23:27:33 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:28.488 23:27:33 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:28.488 23:27:33 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:28.488 23:27:33 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:28.488 23:27:33 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:28.488 23:27:33 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:28.488 23:27:33 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:28.488 23:27:33 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:28.488 23:27:33 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:28.488 23:27:33 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:28.488 23:27:33 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:28.488 23:27:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:28.488 23:27:33 -- 
nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:28.488 23:27:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:28.488 23:27:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:28.488 23:27:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:28.488 23:27:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:28.488 23:27:33 -- nvmf/common.sh@104 -- # continue 2 00:28:28.488 23:27:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.488 23:27:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:28.488 23:27:33 -- nvmf/common.sh@104 -- # continue 2 00:28:28.488 23:27:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:28.488 23:27:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:28.488 23:27:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:28.488 23:27:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:28.488 23:27:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:28.488 23:27:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:28.488 23:27:33 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:28.488 23:27:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:28.488 23:27:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:28.488 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:28.488 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:28.488 altname enp217s0f0np0 00:28:28.488 altname ens818f0np0 00:28:28.488 inet 192.168.100.8/24 scope global mlx_0_0 00:28:28.488 valid_lft forever preferred_lft forever 00:28:28.488 23:27:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:28.488 23:27:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:28.488 23:27:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:28.488 23:27:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:28.488 23:27:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:28.488 23:27:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:28.488 23:27:34 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:28.488 23:27:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:28.488 23:27:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:28.488 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:28.488 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:28.488 altname enp217s0f1np1 00:28:28.488 altname ens818f1np1 00:28:28.488 inet 192.168.100.9/24 scope global mlx_0_1 00:28:28.488 valid_lft forever preferred_lft forever 00:28:28.488 23:27:34 -- nvmf/common.sh@410 -- # return 0 00:28:28.488 23:27:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:28.488 23:27:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:28.488 23:27:34 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:28.488 23:27:34 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:28.488 23:27:34 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:28.488 23:27:34 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:28.488 23:27:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:28.488 23:27:34 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:28.488 23:27:34 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:28.488 23:27:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:28.488 23:27:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:28.488 23:27:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.488 23:27:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:28.488 23:27:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:28.488 23:27:34 -- nvmf/common.sh@104 -- # continue 2 00:28:28.488 23:27:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:28.488 23:27:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.488 23:27:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:28.488 23:27:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.488 23:27:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:28.488 23:27:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:28.488 23:27:34 -- nvmf/common.sh@104 -- # continue 2 00:28:28.489 23:27:34 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:28.489 23:27:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:28.489 23:27:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:28.489 23:27:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:28.489 23:27:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:28.489 23:27:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:28.489 23:27:34 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:28.489 23:27:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:28.489 23:27:34 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:28.489 23:27:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:28.489 23:27:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:28.489 23:27:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:28.489 23:27:34 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:28.489 192.168.100.9' 00:28:28.489 23:27:34 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:28.489 192.168.100.9' 00:28:28.489 23:27:34 -- nvmf/common.sh@445 -- # head -n 1 00:28:28.489 23:27:34 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:28.489 23:27:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:28.489 192.168.100.9' 00:28:28.489 23:27:34 -- nvmf/common.sh@446 -- # head -n 1 00:28:28.489 23:27:34 -- nvmf/common.sh@446 -- # tail -n +2 00:28:28.489 23:27:34 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:28.489 23:27:34 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:28.489 23:27:34 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:28.489 23:27:34 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:28.489 23:27:34 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:28.489 23:27:34 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:28.489 23:27:34 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:28.489 23:27:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:28.489 23:27:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:28.489 23:27:34 -- common/autotest_common.sh@10 -- # set +x 00:28:28.489 
************************************ 00:28:28.489 START TEST nvmf_target_disconnect_tc1 00:28:28.489 ************************************ 00:28:28.489 23:27:34 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:28:28.489 23:27:34 -- host/target_disconnect.sh@32 -- # set +e 00:28:28.489 23:27:34 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:28.489 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.748 [2024-11-02 23:27:34.250707] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:28.748 [2024-11-02 23:27:34.250752] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:28.748 [2024-11-02 23:27:34.250761] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:28:29.686 [2024-11-02 23:27:35.254880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:29.686 [2024-11-02 23:27:35.254944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:28:29.686 [2024-11-02 23:27:35.254989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:28:29.686 [2024-11-02 23:27:35.255048] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:29.686 [2024-11-02 23:27:35.255058] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:29.686 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:28:29.686 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:29.686 Initializing NVMe Controllers 00:28:29.686 23:27:35 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:29.686 23:27:35 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:29.686 23:27:35 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:28:29.686 23:27:35 -- common/autotest_common.sh@1132 -- # return 0 00:28:29.686 23:27:35 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:29.686 23:27:35 -- host/target_disconnect.sh@41 -- # set -e 00:28:29.686 00:28:29.686 real 0m1.133s 00:28:29.686 user 0m0.863s 00:28:29.686 sys 0m0.259s 00:28:29.686 23:27:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.686 23:27:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.686 ************************************ 00:28:29.686 END TEST nvmf_target_disconnect_tc1 00:28:29.686 ************************************ 00:28:29.686 23:27:35 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:29.686 23:27:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:29.686 23:27:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:29.686 23:27:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.686 ************************************ 00:28:29.686 START TEST nvmf_target_disconnect_tc2 00:28:29.686 ************************************ 00:28:29.686 23:27:35 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:28:29.686 23:27:35 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:28:29.686 23:27:35 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:29.686 
23:27:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:29.686 23:27:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:29.686 23:27:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.686 23:27:35 -- nvmf/common.sh@469 -- # nvmfpid=769926 00:28:29.686 23:27:35 -- nvmf/common.sh@470 -- # waitforlisten 769926 00:28:29.686 23:27:35 -- common/autotest_common.sh@819 -- # '[' -z 769926 ']' 00:28:29.686 23:27:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.686 23:27:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:29.686 23:27:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.686 23:27:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:29.686 23:27:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.686 23:27:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:29.686 [2024-11-02 23:27:35.360623] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:29.686 [2024-11-02 23:27:35.360676] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.686 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.946 [2024-11-02 23:27:35.445885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.946 [2024-11-02 23:27:35.516910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:29.946 [2024-11-02 23:27:35.517037] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.946 [2024-11-02 23:27:35.517048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.946 [2024-11-02 23:27:35.517057] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:29.946 [2024-11-02 23:27:35.517176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:29.946 [2024-11-02 23:27:35.517287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:29.946 [2024-11-02 23:27:35.517376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:29.946 [2024-11-02 23:27:35.517375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:30.515 23:27:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:30.515 23:27:36 -- common/autotest_common.sh@852 -- # return 0 00:28:30.515 23:27:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:30.515 23:27:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:30.515 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:28:30.515 23:27:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.515 23:27:36 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:30.515 23:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.515 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:28:30.515 Malloc0 00:28:30.515 23:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.515 23:27:36 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:30.515 23:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.515 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:28:30.515 [2024-11-02 23:27:36.260148] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae73c0/0xaf2dc0) succeed. 00:28:30.515 [2024-11-02 23:27:36.269551] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae89b0/0xb72e00) succeed. 
00:28:30.774 23:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.774 23:27:36 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.774 23:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.774 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:28:30.774 23:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.774 23:27:36 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:30.774 23:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.774 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:28:30.774 23:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.774 23:27:36 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:30.774 23:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.774 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:28:30.774 [2024-11-02 23:27:36.408494] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:30.774 23:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.774 23:27:36 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:30.774 23:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.774 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:28:30.774 23:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.774 23:27:36 -- host/target_disconnect.sh@50 -- # reconnectpid=770212 00:28:30.774 23:27:36 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:30.774 23:27:36 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:30.774 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.681 23:27:38 -- host/target_disconnect.sh@53 -- # kill -9 769926 00:28:32.681 23:27:38 -- host/target_disconnect.sh@55 -- # sleep 2 00:28:34.066 Write completed with error (sct=0, sc=8) 00:28:34.066 starting I/O failed 00:28:34.066 Write completed with error (sct=0, sc=8) 00:28:34.066 starting I/O failed 00:28:34.066 Read completed with error (sct=0, sc=8) 00:28:34.066 starting I/O failed 00:28:34.066 Write completed with error (sct=0, sc=8) 00:28:34.066 starting I/O failed 00:28:34.066 Read completed with error (sct=0, sc=8) 00:28:34.066 starting I/O failed 00:28:34.066 Write completed with error (sct=0, sc=8) 00:28:34.066 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error 
(sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Write completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 Read completed with error (sct=0, sc=8) 00:28:34.067 starting I/O failed 00:28:34.067 [2024-11-02 23:27:39.602501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.010 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 769926 Killed "${NVMF_APP[@]}" "$@" 00:28:35.010 23:27:40 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:28:35.010 23:27:40 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:35.010 23:27:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:35.010 23:27:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:35.010 23:27:40 -- common/autotest_common.sh@10 -- # set +x 00:28:35.010 23:27:40 -- nvmf/common.sh@469 -- # nvmfpid=770769 00:28:35.010 23:27:40 -- nvmf/common.sh@470 -- # waitforlisten 770769 00:28:35.010 23:27:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:35.010 23:27:40 -- common/autotest_common.sh@819 -- # '[' -z 770769 ']' 00:28:35.010 23:27:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.010 23:27:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:35.010 23:27:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.010 23:27:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:35.010 23:27:40 -- common/autotest_common.sh@10 -- # set +x 00:28:35.010 [2024-11-02 23:27:40.484529] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:35.010 [2024-11-02 23:27:40.484582] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.010 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.010 [2024-11-02 23:27:40.568685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Read completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.010 Write completed with error (sct=0, sc=8) 00:28:35.010 starting I/O failed 00:28:35.011 [2024-11-02 23:27:40.607645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.011 [2024-11-02 23:27:40.636643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:35.011 [2024-11-02 23:27:40.636754] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.011 [2024-11-02 23:27:40.636764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.011 [2024-11-02 23:27:40.636772] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.011 [2024-11-02 23:27:40.636916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:35.011 [2024-11-02 23:27:40.637026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:35.011 [2024-11-02 23:27:40.637134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:35.011 [2024-11-02 23:27:40.637136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:35.579 23:27:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:35.579 23:27:41 -- common/autotest_common.sh@852 -- # return 0 00:28:35.579 23:27:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:35.579 23:27:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:35.579 23:27:41 -- common/autotest_common.sh@10 -- # set +x 00:28:35.838 23:27:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.838 23:27:41 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:35.838 23:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.838 23:27:41 -- common/autotest_common.sh@10 -- # set +x 00:28:35.838 Malloc0 00:28:35.838 23:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.838 23:27:41 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:35.838 23:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.838 23:27:41 -- common/autotest_common.sh@10 -- # set +x 00:28:35.838 [2024-11-02 23:27:41.390432] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x54c3c0/0x557dc0) succeed. 00:28:35.838 [2024-11-02 23:27:41.400325] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x54d9b0/0x5d7e00) succeed. 
00:28:35.838 23:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.838 23:27:41 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.838 23:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.838 23:27:41 -- common/autotest_common.sh@10 -- # set +x 00:28:35.838 23:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.838 23:27:41 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.838 23:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.838 23:27:41 -- common/autotest_common.sh@10 -- # set +x 00:28:35.838 23:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.838 23:27:41 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:35.838 23:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.838 23:27:41 -- common/autotest_common.sh@10 -- # set +x 00:28:35.838 [2024-11-02 23:27:41.538208] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:35.838 23:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.838 23:27:41 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:35.838 23:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.838 23:27:41 -- common/autotest_common.sh@10 -- # set +x 00:28:35.838 23:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.838 23:27:41 -- host/target_disconnect.sh@58 -- # wait 770212 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error 
(sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Write completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.098 Read completed with error (sct=0, sc=8) 00:28:36.098 starting I/O failed 00:28:36.099 Write completed with error (sct=0, sc=8) 00:28:36.099 starting I/O failed 00:28:36.099 [2024-11-02 23:27:41.612783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 [2024-11-02 23:27:41.626287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.626342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.626362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.626372] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.626389] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.636518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 00:28:36.099 [2024-11-02 23:27:41.646211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.646252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.646270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.646279] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.646287] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.656487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 
00:28:36.099 [2024-11-02 23:27:41.666294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.666336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.666353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.666362] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.666374] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.676615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 00:28:36.099 [2024-11-02 23:27:41.686342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.686384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.686401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.686410] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.686418] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.696739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 00:28:36.099 [2024-11-02 23:27:41.706411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.706458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.706474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.706483] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.706492] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.716809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 
00:28:36.099 [2024-11-02 23:27:41.726423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.726457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.726473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.726483] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.726491] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.736646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 00:28:36.099 [2024-11-02 23:27:41.746391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.746431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.746448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.746456] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.746465] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.756997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 00:28:36.099 [2024-11-02 23:27:41.766642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.766684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.766701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.766710] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.766718] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.776903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 
00:28:36.099 [2024-11-02 23:27:41.786617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.786657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.786673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.786682] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.786691] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.797007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 00:28:36.099 [2024-11-02 23:27:41.806582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.806627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.806645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.806654] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.806662] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.817111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 00:28:36.099 [2024-11-02 23:27:41.826698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.826737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.826753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.826762] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.826770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.099 [2024-11-02 23:27:41.837005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.099 qpair failed and we were unable to recover it. 
00:28:36.099 [2024-11-02 23:27:41.846712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.099 [2024-11-02 23:27:41.846757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.099 [2024-11-02 23:27:41.846774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.099 [2024-11-02 23:27:41.846786] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.099 [2024-11-02 23:27:41.846795] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.359 [2024-11-02 23:27:41.857201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.359 qpair failed and we were unable to recover it. 00:28:36.359 [2024-11-02 23:27:41.866851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.359 [2024-11-02 23:27:41.866892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.359 [2024-11-02 23:27:41.866908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.359 [2024-11-02 23:27:41.866917] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.359 [2024-11-02 23:27:41.866926] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.359 [2024-11-02 23:27:41.877143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.359 qpair failed and we were unable to recover it. 00:28:36.359 [2024-11-02 23:27:41.886911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.359 [2024-11-02 23:27:41.886950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.359 [2024-11-02 23:27:41.886971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:41.886981] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:41.886989] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:41.897529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 
00:28:36.360 [2024-11-02 23:27:41.907123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:41.907164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:41.907180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:41.907189] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:41.907199] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:41.917636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 00:28:36.360 [2024-11-02 23:27:41.927113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:41.927153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:41.927169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:41.927178] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:41.927186] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:41.937672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 00:28:36.360 [2024-11-02 23:27:41.947189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:41.947235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:41.947251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:41.947260] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:41.947268] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:41.957810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 
00:28:36.360 [2024-11-02 23:27:41.967468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:41.967509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:41.967526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:41.967535] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:41.967544] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:41.977738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 00:28:36.360 [2024-11-02 23:27:41.987352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:41.987393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:41.987410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:41.987419] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:41.987427] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:41.997902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 00:28:36.360 [2024-11-02 23:27:42.007403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:42.007443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:42.007459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:42.007468] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:42.007477] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:42.017882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 
00:28:36.360 [2024-11-02 23:27:42.027537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:42.027581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:42.027600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:42.027609] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:42.027617] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:42.037934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 00:28:36.360 [2024-11-02 23:27:42.047598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:42.047640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:42.047656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:42.047665] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:42.047673] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:42.057933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 00:28:36.360 [2024-11-02 23:27:42.067676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:42.067714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:42.067731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:42.067740] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:42.067748] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:42.078105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 
00:28:36.360 [2024-11-02 23:27:42.087681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:42.087720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:42.087736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:42.087745] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:42.087753] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.360 [2024-11-02 23:27:42.098018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.360 qpair failed and we were unable to recover it. 00:28:36.360 [2024-11-02 23:27:42.107791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.360 [2024-11-02 23:27:42.107836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.360 [2024-11-02 23:27:42.107853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.360 [2024-11-02 23:27:42.107862] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.360 [2024-11-02 23:27:42.107873] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.620 [2024-11-02 23:27:42.118267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.620 qpair failed and we were unable to recover it. 00:28:36.620 [2024-11-02 23:27:42.127855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.620 [2024-11-02 23:27:42.127898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.620 [2024-11-02 23:27:42.127915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.620 [2024-11-02 23:27:42.127924] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.620 [2024-11-02 23:27:42.127932] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.620 [2024-11-02 23:27:42.138274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.620 qpair failed and we were unable to recover it. 
00:28:36.620 [2024-11-02 23:27:42.147859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.620 [2024-11-02 23:27:42.147896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.620 [2024-11-02 23:27:42.147912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.620 [2024-11-02 23:27:42.147921] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.620 [2024-11-02 23:27:42.147930] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.620 [2024-11-02 23:27:42.158355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.620 qpair failed and we were unable to recover it. 00:28:36.620 [2024-11-02 23:27:42.168103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.620 [2024-11-02 23:27:42.168143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.620 [2024-11-02 23:27:42.168159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.620 [2024-11-02 23:27:42.168168] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.620 [2024-11-02 23:27:42.168176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.620 [2024-11-02 23:27:42.178319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.620 qpair failed and we were unable to recover it. 00:28:36.620 [2024-11-02 23:27:42.187994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.620 [2024-11-02 23:27:42.188042] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.188058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.188067] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.188075] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.198486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 
00:28:36.621 [2024-11-02 23:27:42.208027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.208064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.208081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.208090] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.208099] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.218504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 00:28:36.621 [2024-11-02 23:27:42.228063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.228104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.228121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.228130] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.228138] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.238565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 00:28:36.621 [2024-11-02 23:27:42.248181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.248220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.248236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.248245] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.248253] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.258602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 
00:28:36.621 [2024-11-02 23:27:42.268264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.268309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.268325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.268334] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.268342] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.278889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 00:28:36.621 [2024-11-02 23:27:42.288186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.288228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.288244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.288255] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.288263] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.298702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 00:28:36.621 [2024-11-02 23:27:42.308288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.308328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.308344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.308354] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.308362] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.318813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 
00:28:36.621 [2024-11-02 23:27:42.328359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.328402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.328419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.328428] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.328437] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.338956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 00:28:36.621 [2024-11-02 23:27:42.348561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.348608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.348624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.348633] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.348641] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.621 [2024-11-02 23:27:42.358976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.621 qpair failed and we were unable to recover it. 00:28:36.621 [2024-11-02 23:27:42.368468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.621 [2024-11-02 23:27:42.368509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.621 [2024-11-02 23:27:42.368525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.621 [2024-11-02 23:27:42.368534] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.621 [2024-11-02 23:27:42.368543] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.881 [2024-11-02 23:27:42.378945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.881 qpair failed and we were unable to recover it. 
00:28:36.881 [2024-11-02 23:27:42.388674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.881 [2024-11-02 23:27:42.388713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.881 [2024-11-02 23:27:42.388729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.881 [2024-11-02 23:27:42.388738] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.881 [2024-11-02 23:27:42.388747] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.881 [2024-11-02 23:27:42.399020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.881 qpair failed and we were unable to recover it. 00:28:36.881 [2024-11-02 23:27:42.408553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.881 [2024-11-02 23:27:42.408594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.881 [2024-11-02 23:27:42.408611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.881 [2024-11-02 23:27:42.408621] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.881 [2024-11-02 23:27:42.408630] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.881 [2024-11-02 23:27:42.418917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.881 qpair failed and we were unable to recover it. 00:28:36.882 [2024-11-02 23:27:42.428679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.428722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.428739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.428747] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.428756] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.439231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 
00:28:36.882 [2024-11-02 23:27:42.448689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.448732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.448748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.448757] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.448767] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.458929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 00:28:36.882 [2024-11-02 23:27:42.468695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.468735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.468754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.468763] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.468771] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.478974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 00:28:36.882 [2024-11-02 23:27:42.488778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.488821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.488837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.488846] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.488854] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.499186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 
00:28:36.882 [2024-11-02 23:27:42.508714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.508757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.508773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.508782] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.508790] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.519379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 00:28:36.882 [2024-11-02 23:27:42.528784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.528829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.528845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.528854] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.528862] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.539071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 00:28:36.882 [2024-11-02 23:27:42.548856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.548892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.548908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.548917] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.548929] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.559210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 
00:28:36.882 [2024-11-02 23:27:42.568882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.568924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.568940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.568949] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.568957] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.579480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 00:28:36.882 [2024-11-02 23:27:42.589080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.589127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.589143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.589152] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.589161] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.599847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 00:28:36.882 [2024-11-02 23:27:42.609036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.609075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.609092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.609101] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.609109] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:36.882 [2024-11-02 23:27:42.619429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.882 qpair failed and we were unable to recover it. 
00:28:36.882 [2024-11-02 23:27:42.629033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.882 [2024-11-02 23:27:42.629070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.882 [2024-11-02 23:27:42.629085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.882 [2024-11-02 23:27:42.629094] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.882 [2024-11-02 23:27:42.629103] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.639679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.649129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.649173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.649189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.649198] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.649206] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.659601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.669173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.669212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.669229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.669238] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.669246] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.679684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 
00:28:37.143 [2024-11-02 23:27:42.689209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.689250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.689266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.689275] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.689283] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.699679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.709359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.709400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.709416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.709425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.709433] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.719935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.729403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.729443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.729459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.729471] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.729480] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.739895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 
00:28:37.143 [2024-11-02 23:27:42.749498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.749537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.749553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.749562] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.749570] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.759931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.769448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.769491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.769507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.769516] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.769525] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.779988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.789579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.789620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.789636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.789644] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.789653] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.800146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 
00:28:37.143 [2024-11-02 23:27:42.809703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.809743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.809759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.809768] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.809776] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.819954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.829641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.829683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.829699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.829708] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.829716] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.840209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.849705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.849752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.849769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.849778] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.849786] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.860199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 
00:28:37.143 [2024-11-02 23:27:42.869736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.869769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.869787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.869796] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.869805] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.143 [2024-11-02 23:27:42.880335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.143 qpair failed and we were unable to recover it. 00:28:37.143 [2024-11-02 23:27:42.889829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.143 [2024-11-02 23:27:42.889870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.143 [2024-11-02 23:27:42.889886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.143 [2024-11-02 23:27:42.889895] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.143 [2024-11-02 23:27:42.889903] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:42.900210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 00:28:37.404 [2024-11-02 23:27:42.909870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:42.909913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:42.909934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:42.909943] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:42.909952] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:42.920405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 
00:28:37.404 [2024-11-02 23:27:42.929951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:42.929994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:42.930010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:42.930019] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:42.930027] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:42.940428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 00:28:37.404 [2024-11-02 23:27:42.950081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:42.950125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:42.950141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:42.950151] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:42.950160] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:42.960513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 00:28:37.404 [2024-11-02 23:27:42.970060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:42.970097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:42.970113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:42.970123] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:42.970131] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:42.980438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 
00:28:37.404 [2024-11-02 23:27:42.990128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:42.990175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:42.990191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:42.990200] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:42.990208] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:43.000609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 00:28:37.404 [2024-11-02 23:27:43.010092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:43.010126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:43.010143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:43.010153] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:43.010161] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:43.020601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 00:28:37.404 [2024-11-02 23:27:43.030316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:43.030358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:43.030373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:43.030382] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:43.030391] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:43.040688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 
00:28:37.404 [2024-11-02 23:27:43.050254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:43.050296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:43.050312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:43.050321] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:43.050329] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:43.060705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 00:28:37.404 [2024-11-02 23:27:43.070373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:43.070411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:43.070427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:43.070436] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:43.070444] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.404 [2024-11-02 23:27:43.080736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.404 qpair failed and we were unable to recover it. 00:28:37.404 [2024-11-02 23:27:43.090336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.404 [2024-11-02 23:27:43.090382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.404 [2024-11-02 23:27:43.090398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.404 [2024-11-02 23:27:43.090407] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.404 [2024-11-02 23:27:43.090415] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.405 [2024-11-02 23:27:43.100778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.405 qpair failed and we were unable to recover it. 
00:28:37.405 [2024-11-02 23:27:43.110438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.405 [2024-11-02 23:27:43.110480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.405 [2024-11-02 23:27:43.110497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.405 [2024-11-02 23:27:43.110506] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.405 [2024-11-02 23:27:43.110514] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.405 [2024-11-02 23:27:43.120938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.405 qpair failed and we were unable to recover it. 00:28:37.405 [2024-11-02 23:27:43.130408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.405 [2024-11-02 23:27:43.130448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.405 [2024-11-02 23:27:43.130464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.405 [2024-11-02 23:27:43.130473] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.405 [2024-11-02 23:27:43.130481] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.405 [2024-11-02 23:27:43.140890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.405 qpair failed and we were unable to recover it. 00:28:37.405 [2024-11-02 23:27:43.150499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.405 [2024-11-02 23:27:43.150536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.405 [2024-11-02 23:27:43.150552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.405 [2024-11-02 23:27:43.150561] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.405 [2024-11-02 23:27:43.150569] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.160924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 
00:28:37.665 [2024-11-02 23:27:43.170611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.170648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.170664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.170677] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.170685] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.180972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 00:28:37.665 [2024-11-02 23:27:43.190659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.190698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.190714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.190723] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.190731] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.201128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 00:28:37.665 [2024-11-02 23:27:43.210675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.210715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.210731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.210740] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.210748] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.221231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 
00:28:37.665 [2024-11-02 23:27:43.230879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.230918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.230933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.230942] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.230950] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.241702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 00:28:37.665 [2024-11-02 23:27:43.250855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.250890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.250907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.250916] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.250924] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.261362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 00:28:37.665 [2024-11-02 23:27:43.270972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.271014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.271030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.271039] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.271047] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.281396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 
00:28:37.665 [2024-11-02 23:27:43.290933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.290979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.290996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.291005] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.291013] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.301376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 00:28:37.665 [2024-11-02 23:27:43.311066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.311106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.311122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.311130] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.311139] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.321410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 00:28:37.665 [2024-11-02 23:27:43.331166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.331199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.331215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.331223] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.331232] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.341466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 
00:28:37.665 [2024-11-02 23:27:43.351217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.351259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.351277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.351286] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.351295] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.361573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 00:28:37.665 [2024-11-02 23:27:43.371270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.371311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.371327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.371336] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.371344] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.381525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 00:28:37.665 [2024-11-02 23:27:43.391346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.391390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.665 [2024-11-02 23:27:43.391405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.665 [2024-11-02 23:27:43.391415] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.665 [2024-11-02 23:27:43.391423] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.665 [2024-11-02 23:27:43.401639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.665 qpair failed and we were unable to recover it. 
00:28:37.665 [2024-11-02 23:27:43.411390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.665 [2024-11-02 23:27:43.411433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.666 [2024-11-02 23:27:43.411450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.666 [2024-11-02 23:27:43.411460] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.666 [2024-11-02 23:27:43.411469] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.925 [2024-11-02 23:27:43.421753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.925 qpair failed and we were unable to recover it. 00:28:37.925 [2024-11-02 23:27:43.431368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.925 [2024-11-02 23:27:43.431409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.925 [2024-11-02 23:27:43.431425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.925 [2024-11-02 23:27:43.431434] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.925 [2024-11-02 23:27:43.431442] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.441867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 00:28:37.926 [2024-11-02 23:27:43.451467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.451506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.451522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.451531] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.451539] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.461848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 
00:28:37.926 [2024-11-02 23:27:43.471432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.471477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.471493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.471502] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.471511] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.481944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 00:28:37.926 [2024-11-02 23:27:43.491650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.491689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.491704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.491713] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.491721] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.502016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 00:28:37.926 [2024-11-02 23:27:43.511700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.511741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.511757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.511766] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.511774] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.522117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 
00:28:37.926 [2024-11-02 23:27:43.531865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.531903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.531923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.531932] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.531940] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.542269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 00:28:37.926 [2024-11-02 23:27:43.551717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.551756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.551771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.551780] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.551788] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.562215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 00:28:37.926 [2024-11-02 23:27:43.571932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.571988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.572005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.572014] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.572022] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.582215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 
00:28:37.926 [2024-11-02 23:27:43.591898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.591936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.591952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.591962] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.591975] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.602234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 00:28:37.926 [2024-11-02 23:27:43.611944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.611993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.612010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.612020] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.612032] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.622199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 00:28:37.926 [2024-11-02 23:27:43.632037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.632076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.632091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.632100] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.632109] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.642534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 
00:28:37.926 [2024-11-02 23:27:43.652115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.652157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.652173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.652183] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.652192] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:37.926 [2024-11-02 23:27:43.662513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:37.926 qpair failed and we were unable to recover it. 00:28:37.926 [2024-11-02 23:27:43.672169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.926 [2024-11-02 23:27:43.672207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.926 [2024-11-02 23:27:43.672223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.926 [2024-11-02 23:27:43.672232] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.926 [2024-11-02 23:27:43.672242] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.186 [2024-11-02 23:27:43.682578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-02 23:27:43.692237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.186 [2024-11-02 23:27:43.692276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.186 [2024-11-02 23:27:43.692292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.186 [2024-11-02 23:27:43.692301] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.186 [2024-11-02 23:27:43.692309] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.186 [2024-11-02 23:27:43.702648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.186 qpair failed and we were unable to recover it. 
00:28:38.186 [2024-11-02 23:27:43.712389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.186 [2024-11-02 23:27:43.712426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.186 [2024-11-02 23:27:43.712443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.712452] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.712461] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.722706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-02 23:27:43.732337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.732382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.732397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.732406] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.732414] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.742643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-02 23:27:43.752530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.752570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.752586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.752594] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.752603] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.762687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 
00:28:38.187 [2024-11-02 23:27:43.772575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.772616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.772632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.772641] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.772649] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.782763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-02 23:27:43.792667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.792712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.792728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.792740] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.792748] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.803004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-02 23:27:43.812638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.812677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.812694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.812703] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.812711] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.822945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 
00:28:38.187 [2024-11-02 23:27:43.832792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.832827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.832844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.832853] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.832861] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.843056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-02 23:27:43.852706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.852745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.852760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.852769] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.852777] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.863148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-02 23:27:43.872737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.872774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.872791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.872801] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.872809] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.883439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 
00:28:38.187 [2024-11-02 23:27:43.892830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.892873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.892890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.892898] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.892906] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.903298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-02 23:27:43.912927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.912965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.912995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.913005] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.913014] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.187 [2024-11-02 23:27:43.923194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-02 23:27:43.932870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.187 [2024-11-02 23:27:43.932912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.187 [2024-11-02 23:27:43.932928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.187 [2024-11-02 23:27:43.932938] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.187 [2024-11-02 23:27:43.932946] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.447 [2024-11-02 23:27:43.943218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.447 qpair failed and we were unable to recover it. 
00:28:38.447 [2024-11-02 23:27:43.953031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.447 [2024-11-02 23:27:43.953073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.447 [2024-11-02 23:27:43.953090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.447 [2024-11-02 23:27:43.953099] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.447 [2024-11-02 23:27:43.953107] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.447 [2024-11-02 23:27:43.963494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.447 qpair failed and we were unable to recover it. 00:28:38.447 [2024-11-02 23:27:43.973082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.447 [2024-11-02 23:27:43.973120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.447 [2024-11-02 23:27:43.973141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.447 [2024-11-02 23:27:43.973150] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.447 [2024-11-02 23:27:43.973158] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.447 [2024-11-02 23:27:43.983463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.447 qpair failed and we were unable to recover it. 00:28:38.447 [2024-11-02 23:27:43.993141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:43.993179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:43.993195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:43.993204] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:43.993212] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.003487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 
00:28:38.448 [2024-11-02 23:27:44.013224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.013266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.013283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.013292] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.013300] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.023640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 00:28:38.448 [2024-11-02 23:27:44.033290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.033329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.033345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.033353] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.033362] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.043467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 00:28:38.448 [2024-11-02 23:27:44.053346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.053391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.053408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.053416] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.053428] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.063591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 
00:28:38.448 [2024-11-02 23:27:44.073256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.073295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.073312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.073320] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.073329] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.083838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 00:28:38.448 [2024-11-02 23:27:44.093422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.093461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.093476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.093486] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.093494] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.103743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 00:28:38.448 [2024-11-02 23:27:44.113519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.113565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.113581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.113590] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.113599] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.123903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 
00:28:38.448 [2024-11-02 23:27:44.133537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.133580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.133595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.133604] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.133612] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.143988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 00:28:38.448 [2024-11-02 23:27:44.153602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.153641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.153659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.153668] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.153676] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.164130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 00:28:38.448 [2024-11-02 23:27:44.173783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.173822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.173838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.173847] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.173855] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.448 [2024-11-02 23:27:44.184195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.448 qpair failed and we were unable to recover it. 
00:28:38.448 [2024-11-02 23:27:44.193719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.448 [2024-11-02 23:27:44.193756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.448 [2024-11-02 23:27:44.193772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.448 [2024-11-02 23:27:44.193781] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.448 [2024-11-02 23:27:44.193790] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.204435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.213908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.213951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.213973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.213982] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.213991] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.224397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.233881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.233918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.233933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.233946] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.233954] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.244427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 
00:28:38.710 [2024-11-02 23:27:44.254014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.254052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.254068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.254077] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.254085] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.264537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.274028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.274064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.274080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.274089] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.274097] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.284625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.294179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.294218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.294234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.294243] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.294251] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.304571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 
00:28:38.710 [2024-11-02 23:27:44.314193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.314236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.314252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.314261] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.314270] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.324591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.334274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.334314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.334330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.334339] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.334348] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.344680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.354320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.354363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.354378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.354388] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.354396] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.364827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 
00:28:38.710 [2024-11-02 23:27:44.374325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.374367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.374383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.374392] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.374400] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.384785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.394459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.394502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.394518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.394527] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.394537] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.404934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.414586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.414625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.414644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.414654] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.414663] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.424945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 
00:28:38.710 [2024-11-02 23:27:44.434542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.434579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.434595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.434604] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.434612] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.710 [2024-11-02 23:27:44.445108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.710 qpair failed and we were unable to recover it. 00:28:38.710 [2024-11-02 23:27:44.454658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.710 [2024-11-02 23:27:44.454694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.710 [2024-11-02 23:27:44.454710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.710 [2024-11-02 23:27:44.454719] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.710 [2024-11-02 23:27:44.454728] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.711 [2024-11-02 23:27:44.465019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.711 qpair failed and we were unable to recover it. 00:28:38.971 [2024-11-02 23:27:44.474705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.971 [2024-11-02 23:27:44.474749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.971 [2024-11-02 23:27:44.474765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.971 [2024-11-02 23:27:44.474774] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.971 [2024-11-02 23:27:44.474782] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.971 [2024-11-02 23:27:44.485302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.971 qpair failed and we were unable to recover it. 
00:28:38.971 [2024-11-02 23:27:44.494691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.971 [2024-11-02 23:27:44.494734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.971 [2024-11-02 23:27:44.494749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.971 [2024-11-02 23:27:44.494758] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.971 [2024-11-02 23:27:44.494771] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.971 [2024-11-02 23:27:44.505049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.971 qpair failed and we were unable to recover it. 00:28:38.971 [2024-11-02 23:27:44.514800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.971 [2024-11-02 23:27:44.514837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.971 [2024-11-02 23:27:44.514853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.971 [2024-11-02 23:27:44.514862] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.971 [2024-11-02 23:27:44.514870] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.971 [2024-11-02 23:27:44.525672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.971 qpair failed and we were unable to recover it. 00:28:38.971 [2024-11-02 23:27:44.534935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.971 [2024-11-02 23:27:44.534981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.971 [2024-11-02 23:27:44.534997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.535006] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.535015] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.545305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 
00:28:38.972 [2024-11-02 23:27:44.554901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.554940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.554956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.554965] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.554978] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.565450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-11-02 23:27:44.574986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.575028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.575043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.575053] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.575061] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.585295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-11-02 23:27:44.595040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.595080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.595095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.595104] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.595112] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.605708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 
00:28:38.972 [2024-11-02 23:27:44.615007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.615045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.615061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.615070] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.615079] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.625722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-11-02 23:27:44.635129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.635168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.635184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.635193] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.635202] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.645804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-11-02 23:27:44.655319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.655361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.655377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.655386] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.655394] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.665567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 
00:28:38.972 [2024-11-02 23:27:44.675374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.675414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.675431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.675443] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.675451] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.685747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-11-02 23:27:44.695382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.695424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.695440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.695449] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.695457] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.705831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 00:28:38.972 [2024-11-02 23:27:44.715419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.972 [2024-11-02 23:27:44.715452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.972 [2024-11-02 23:27:44.715469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.972 [2024-11-02 23:27:44.715478] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.972 [2024-11-02 23:27:44.715487] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:38.972 [2024-11-02 23:27:44.725762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.972 qpair failed and we were unable to recover it. 
00:28:39.233 [2024-11-02 23:27:44.735529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.233 [2024-11-02 23:27:44.735570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.233 [2024-11-02 23:27:44.735586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.233 [2024-11-02 23:27:44.735596] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.233 [2024-11-02 23:27:44.735605] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.233 [2024-11-02 23:27:44.745739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.233 qpair failed and we were unable to recover it. 00:28:39.233 [2024-11-02 23:27:44.755550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.233 [2024-11-02 23:27:44.755586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.233 [2024-11-02 23:27:44.755603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.233 [2024-11-02 23:27:44.755612] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.233 [2024-11-02 23:27:44.755620] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.233 [2024-11-02 23:27:44.766148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.233 qpair failed and we were unable to recover it. 00:28:39.233 [2024-11-02 23:27:44.775595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.233 [2024-11-02 23:27:44.775631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.233 [2024-11-02 23:27:44.775647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.233 [2024-11-02 23:27:44.775656] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.233 [2024-11-02 23:27:44.775665] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.233 [2024-11-02 23:27:44.786075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.233 qpair failed and we were unable to recover it. 
00:28:39.233 [2024-11-02 23:27:44.795658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.233 [2024-11-02 23:27:44.795696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.233 [2024-11-02 23:27:44.795712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.233 [2024-11-02 23:27:44.795721] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.233 [2024-11-02 23:27:44.795730] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.233 [2024-11-02 23:27:44.806057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.233 qpair failed and we were unable to recover it. 00:28:39.233 [2024-11-02 23:27:44.815649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.233 [2024-11-02 23:27:44.815692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.233 [2024-11-02 23:27:44.815709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.233 [2024-11-02 23:27:44.815718] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.233 [2024-11-02 23:27:44.815727] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.233 [2024-11-02 23:27:44.825943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.233 qpair failed and we were unable to recover it. 00:28:39.234 [2024-11-02 23:27:44.835787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.234 [2024-11-02 23:27:44.835831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.234 [2024-11-02 23:27:44.835848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.234 [2024-11-02 23:27:44.835857] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.234 [2024-11-02 23:27:44.835865] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.234 [2024-11-02 23:27:44.846220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.234 qpair failed and we were unable to recover it. 
00:28:39.234 [2024-11-02 23:27:44.855748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.234 [2024-11-02 23:27:44.855783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.234 [2024-11-02 23:27:44.855804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.234 [2024-11-02 23:27:44.855813] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.234 [2024-11-02 23:27:44.855821] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.234 [2024-11-02 23:27:44.866307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.234 qpair failed and we were unable to recover it. 00:28:39.234 [2024-11-02 23:27:44.875924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.234 [2024-11-02 23:27:44.875964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.234 [2024-11-02 23:27:44.875985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.234 [2024-11-02 23:27:44.875994] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.234 [2024-11-02 23:27:44.876002] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.234 [2024-11-02 23:27:44.886385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.234 qpair failed and we were unable to recover it. 00:28:39.234 [2024-11-02 23:27:44.895994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.234 [2024-11-02 23:27:44.896035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.234 [2024-11-02 23:27:44.896052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.234 [2024-11-02 23:27:44.896061] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.234 [2024-11-02 23:27:44.896070] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.234 [2024-11-02 23:27:44.906377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.234 qpair failed and we were unable to recover it. 
00:28:39.234 [2024-11-02 23:27:44.916007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.234 [2024-11-02 23:27:44.916051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.234 [2024-11-02 23:27:44.916068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.234 [2024-11-02 23:27:44.916078] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.234 [2024-11-02 23:27:44.916087] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.234 [2024-11-02 23:27:44.926563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.234 qpair failed and we were unable to recover it. 00:28:39.234 [2024-11-02 23:27:44.936034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.234 [2024-11-02 23:27:44.936074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.234 [2024-11-02 23:27:44.936090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.234 [2024-11-02 23:27:44.936099] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.234 [2024-11-02 23:27:44.936107] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.234 [2024-11-02 23:27:44.946444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.234 qpair failed and we were unable to recover it. 00:28:39.234 [2024-11-02 23:27:44.956092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.234 [2024-11-02 23:27:44.956129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.234 [2024-11-02 23:27:44.956145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.234 [2024-11-02 23:27:44.956154] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.234 [2024-11-02 23:27:44.956162] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.234 [2024-11-02 23:27:44.966629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.234 qpair failed and we were unable to recover it. 
00:28:39.234 [2024-11-02 23:27:44.976260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.234 [2024-11-02 23:27:44.976302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.234 [2024-11-02 23:27:44.976318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.234 [2024-11-02 23:27:44.976327] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.234 [2024-11-02 23:27:44.976335] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.234 [2024-11-02 23:27:44.986642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.234 qpair failed and we were unable to recover it. 00:28:39.495 [2024-11-02 23:27:44.996292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.495 [2024-11-02 23:27:44.996337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.495 [2024-11-02 23:27:44.996352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.495 [2024-11-02 23:27:44.996361] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.495 [2024-11-02 23:27:44.996370] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.495 [2024-11-02 23:27:45.006832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.495 qpair failed and we were unable to recover it. 00:28:39.495 [2024-11-02 23:27:45.016318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.495 [2024-11-02 23:27:45.016359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.495 [2024-11-02 23:27:45.016375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.495 [2024-11-02 23:27:45.016384] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.495 [2024-11-02 23:27:45.016393] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.495 [2024-11-02 23:27:45.026785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.495 qpair failed and we were unable to recover it. 
00:28:39.495 [2024-11-02 23:27:45.036382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.495 [2024-11-02 23:27:45.036421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.495 [2024-11-02 23:27:45.036438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.495 [2024-11-02 23:27:45.036447] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.495 [2024-11-02 23:27:45.036455] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.495 [2024-11-02 23:27:45.047006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.495 qpair failed and we were unable to recover it. 00:28:39.495 [2024-11-02 23:27:45.056511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.495 [2024-11-02 23:27:45.056555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.495 [2024-11-02 23:27:45.056571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.495 [2024-11-02 23:27:45.056580] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.495 [2024-11-02 23:27:45.056589] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.495 [2024-11-02 23:27:45.066917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.495 qpair failed and we were unable to recover it. 00:28:39.495 [2024-11-02 23:27:45.076454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.495 [2024-11-02 23:27:45.076493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.495 [2024-11-02 23:27:45.076510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.495 [2024-11-02 23:27:45.076519] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.495 [2024-11-02 23:27:45.076527] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.495 [2024-11-02 23:27:45.086906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.495 qpair failed and we were unable to recover it. 
00:28:39.495 [2024-11-02 23:27:45.096514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.495 [2024-11-02 23:27:45.096559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.495 [2024-11-02 23:27:45.096575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.495 [2024-11-02 23:27:45.096584] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.495 [2024-11-02 23:27:45.096592] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.495 [2024-11-02 23:27:45.106918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.495 qpair failed and we were unable to recover it. 00:28:39.495 [2024-11-02 23:27:45.116541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.495 [2024-11-02 23:27:45.116579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.495 [2024-11-02 23:27:45.116596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.495 [2024-11-02 23:27:45.116608] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.495 [2024-11-02 23:27:45.116617] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.495 [2024-11-02 23:27:45.127159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.495 qpair failed and we were unable to recover it. 00:28:39.495 [2024-11-02 23:27:45.136640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.495 [2024-11-02 23:27:45.136685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.495 [2024-11-02 23:27:45.136702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.496 [2024-11-02 23:27:45.136711] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.496 [2024-11-02 23:27:45.136719] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.496 [2024-11-02 23:27:45.147094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.496 qpair failed and we were unable to recover it. 
00:28:39.496 [2024-11-02 23:27:45.156729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.496 [2024-11-02 23:27:45.156769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.496 [2024-11-02 23:27:45.156785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.496 [2024-11-02 23:27:45.156794] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.496 [2024-11-02 23:27:45.156802] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.496 [2024-11-02 23:27:45.167646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.496 qpair failed and we were unable to recover it. 00:28:39.496 [2024-11-02 23:27:45.176725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.496 [2024-11-02 23:27:45.176766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.496 [2024-11-02 23:27:45.176782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.496 [2024-11-02 23:27:45.176791] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.496 [2024-11-02 23:27:45.176799] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.496 [2024-11-02 23:27:45.187193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.496 qpair failed and we were unable to recover it. 00:28:39.496 [2024-11-02 23:27:45.196770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.496 [2024-11-02 23:27:45.196812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.496 [2024-11-02 23:27:45.196828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.496 [2024-11-02 23:27:45.196837] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.496 [2024-11-02 23:27:45.196845] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.496 [2024-11-02 23:27:45.207225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.496 qpair failed and we were unable to recover it. 
00:28:39.496 [2024-11-02 23:27:45.216841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.496 [2024-11-02 23:27:45.216883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.496 [2024-11-02 23:27:45.216899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.496 [2024-11-02 23:27:45.216908] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.496 [2024-11-02 23:27:45.216917] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.496 [2024-11-02 23:27:45.227273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.496 qpair failed and we were unable to recover it. 00:28:39.496 [2024-11-02 23:27:45.236895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.496 [2024-11-02 23:27:45.236940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.496 [2024-11-02 23:27:45.236957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.496 [2024-11-02 23:27:45.236971] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.496 [2024-11-02 23:27:45.236980] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.496 [2024-11-02 23:27:45.247202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.496 qpair failed and we were unable to recover it. 00:28:39.756 [2024-11-02 23:27:45.256988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.756 [2024-11-02 23:27:45.257030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.756 [2024-11-02 23:27:45.257046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.756 [2024-11-02 23:27:45.257055] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.756 [2024-11-02 23:27:45.257064] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.756 [2024-11-02 23:27:45.267348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 
00:28:39.757 [2024-11-02 23:27:45.277032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.277077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.277093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.277102] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.277111] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.287502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 00:28:39.757 [2024-11-02 23:27:45.297133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.297174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.297193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.297202] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.297211] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.307361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 00:28:39.757 [2024-11-02 23:27:45.317236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.317278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.317295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.317304] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.317312] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.327579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 
00:28:39.757 [2024-11-02 23:27:45.337361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.337398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.337414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.337423] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.337431] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.347542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 00:28:39.757 [2024-11-02 23:27:45.357282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.357315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.357331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.357340] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.357348] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.367806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 00:28:39.757 [2024-11-02 23:27:45.377314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.377354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.377370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.377379] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.377387] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.387735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 
00:28:39.757 [2024-11-02 23:27:45.397487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.397527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.397543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.397551] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.397560] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.407711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 00:28:39.757 [2024-11-02 23:27:45.417620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.417657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.417673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.417682] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.417690] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.427805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 00:28:39.757 [2024-11-02 23:27:45.437559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.437599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.437615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.437624] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.437632] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.447878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 
00:28:39.757 [2024-11-02 23:27:45.457629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.457667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.457683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.457692] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.457701] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.468024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 00:28:39.757 [2024-11-02 23:27:45.477745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.477783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.477802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.477811] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.477819] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.487990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 00:28:39.757 [2024-11-02 23:27:45.497701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.757 [2024-11-02 23:27:45.497739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.757 [2024-11-02 23:27:45.497755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.757 [2024-11-02 23:27:45.497763] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.757 [2024-11-02 23:27:45.497772] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:39.757 [2024-11-02 23:27:45.508221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.757 qpair failed and we were unable to recover it. 
00:28:40.018 [2024-11-02 23:27:45.517932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.018 [2024-11-02 23:27:45.517979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.018 [2024-11-02 23:27:45.517995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.018 [2024-11-02 23:27:45.518004] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.018 [2024-11-02 23:27:45.518013] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.018 [2024-11-02 23:27:45.528139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.018 qpair failed and we were unable to recover it. 00:28:40.018 [2024-11-02 23:27:45.537806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.018 [2024-11-02 23:27:45.537846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.018 [2024-11-02 23:27:45.537862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.018 [2024-11-02 23:27:45.537871] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.018 [2024-11-02 23:27:45.537879] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.018 [2024-11-02 23:27:45.548215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.018 qpair failed and we were unable to recover it. 00:28:40.018 [2024-11-02 23:27:45.557858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.018 [2024-11-02 23:27:45.557901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.018 [2024-11-02 23:27:45.557916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.018 [2024-11-02 23:27:45.557925] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.018 [2024-11-02 23:27:45.557937] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.018 [2024-11-02 23:27:45.568260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.018 qpair failed and we were unable to recover it. 
00:28:40.018 [2024-11-02 23:27:45.577928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.018 [2024-11-02 23:27:45.577980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.018 [2024-11-02 23:27:45.577996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.018 [2024-11-02 23:27:45.578004] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.018 [2024-11-02 23:27:45.578013] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.018 [2024-11-02 23:27:45.588297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.018 qpair failed and we were unable to recover it. 00:28:40.018 [2024-11-02 23:27:45.597956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.018 [2024-11-02 23:27:45.598003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.018 [2024-11-02 23:27:45.598019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.018 [2024-11-02 23:27:45.598028] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.018 [2024-11-02 23:27:45.598036] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.018 [2024-11-02 23:27:45.608348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.018 qpair failed and we were unable to recover it. 00:28:40.018 [2024-11-02 23:27:45.618148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.018 [2024-11-02 23:27:45.618190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.018 [2024-11-02 23:27:45.618206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.018 [2024-11-02 23:27:45.618215] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.018 [2024-11-02 23:27:45.618223] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.018 [2024-11-02 23:27:45.628107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.018 qpair failed and we were unable to recover it. 
00:28:40.018 [2024-11-02 23:27:45.638078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.018 [2024-11-02 23:27:45.638126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.018 [2024-11-02 23:27:45.638142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.018 [2024-11-02 23:27:45.638151] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.018 [2024-11-02 23:27:45.638159] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.018 [2024-11-02 23:27:45.648595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.019 qpair failed and we were unable to recover it. 00:28:40.019 [2024-11-02 23:27:45.658163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.019 [2024-11-02 23:27:45.658198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.019 [2024-11-02 23:27:45.658214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.019 [2024-11-02 23:27:45.658223] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.019 [2024-11-02 23:27:45.658231] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.019 [2024-11-02 23:27:45.668649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.019 qpair failed and we were unable to recover it. 00:28:40.019 [2024-11-02 23:27:45.678244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.019 [2024-11-02 23:27:45.678287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.019 [2024-11-02 23:27:45.678303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.019 [2024-11-02 23:27:45.678312] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.019 [2024-11-02 23:27:45.678320] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.019 [2024-11-02 23:27:45.688574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.019 qpair failed and we were unable to recover it. 
00:28:40.019 [2024-11-02 23:27:45.698293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.019 [2024-11-02 23:27:45.698331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.019 [2024-11-02 23:27:45.698347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.019 [2024-11-02 23:27:45.698356] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.019 [2024-11-02 23:27:45.698364] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.019 [2024-11-02 23:27:45.708530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.019 qpair failed and we were unable to recover it. 00:28:40.019 [2024-11-02 23:27:45.718412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.019 [2024-11-02 23:27:45.718450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.019 [2024-11-02 23:27:45.718467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.019 [2024-11-02 23:27:45.718476] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.019 [2024-11-02 23:27:45.718484] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.019 [2024-11-02 23:27:45.728809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.019 qpair failed and we were unable to recover it. 00:28:40.019 [2024-11-02 23:27:45.738444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.019 [2024-11-02 23:27:45.738480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.019 [2024-11-02 23:27:45.738495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.019 [2024-11-02 23:27:45.738507] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.019 [2024-11-02 23:27:45.738516] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.019 [2024-11-02 23:27:45.748742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.019 qpair failed and we were unable to recover it. 
00:28:40.019 [2024-11-02 23:27:45.758507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.019 [2024-11-02 23:27:45.758544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.019 [2024-11-02 23:27:45.758560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.019 [2024-11-02 23:27:45.758568] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.019 [2024-11-02 23:27:45.758577] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.019 [2024-11-02 23:27:45.768799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.019 qpair failed and we were unable to recover it. 00:28:40.280 [2024-11-02 23:27:45.778410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.778452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.778468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.778477] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.778486] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.788852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 00:28:40.280 [2024-11-02 23:27:45.798503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.798549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.798565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.798574] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.798583] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.809272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 
00:28:40.280 [2024-11-02 23:27:45.818549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.818586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.818603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.818612] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.818621] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.829058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 00:28:40.280 [2024-11-02 23:27:45.838604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.838641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.838657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.838666] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.838674] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.849138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 00:28:40.280 [2024-11-02 23:27:45.858736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.858774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.858791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.858800] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.858808] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.869073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 
00:28:40.280 [2024-11-02 23:27:45.878752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.878794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.878811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.878821] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.878829] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.889249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 00:28:40.280 [2024-11-02 23:27:45.898869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.898912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.898928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.898937] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.898945] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.909346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 00:28:40.280 [2024-11-02 23:27:45.918934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.918977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.918999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.919009] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.919018] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.929386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 
00:28:40.280 [2024-11-02 23:27:45.938982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.939022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.939041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.939051] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.939061] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.949418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 00:28:40.280 [2024-11-02 23:27:45.958994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.959038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.959054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.959063] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.959072] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.969348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 00:28:40.280 [2024-11-02 23:27:45.979162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.979205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.280 [2024-11-02 23:27:45.979221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.280 [2024-11-02 23:27:45.979230] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.280 [2024-11-02 23:27:45.979238] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.280 [2024-11-02 23:27:45.989422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.280 qpair failed and we were unable to recover it. 
00:28:40.280 [2024-11-02 23:27:45.999179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.280 [2024-11-02 23:27:45.999225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.281 [2024-11-02 23:27:45.999241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.281 [2024-11-02 23:27:45.999251] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.281 [2024-11-02 23:27:45.999262] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.281 [2024-11-02 23:27:46.009698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.281 qpair failed and we were unable to recover it. 00:28:40.281 [2024-11-02 23:27:46.019215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.281 [2024-11-02 23:27:46.019257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.281 [2024-11-02 23:27:46.019274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.281 [2024-11-02 23:27:46.019283] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.281 [2024-11-02 23:27:46.019291] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.281 [2024-11-02 23:27:46.029655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.281 qpair failed and we were unable to recover it. 00:28:40.542 [2024-11-02 23:27:46.039283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.039324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.039340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.039349] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.039358] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.049743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 
00:28:40.542 [2024-11-02 23:27:46.059304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.059343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.059359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.059368] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.059376] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.069750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 00:28:40.542 [2024-11-02 23:27:46.079453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.079486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.079502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.079511] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.079519] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.089736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 00:28:40.542 [2024-11-02 23:27:46.099471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.099512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.099528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.099537] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.099545] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.109751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 
00:28:40.542 [2024-11-02 23:27:46.119533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.119572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.119589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.119598] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.119606] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.130075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 00:28:40.542 [2024-11-02 23:27:46.139619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.139657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.139674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.139683] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.139691] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.149923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 00:28:40.542 [2024-11-02 23:27:46.159666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.159701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.159717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.159726] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.159734] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.170110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 
00:28:40.542 [2024-11-02 23:27:46.179728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.179765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.179781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.179794] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.179802] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.190239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 00:28:40.542 [2024-11-02 23:27:46.199785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.199825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.199842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.199851] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.199859] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.210359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 00:28:40.542 [2024-11-02 23:27:46.219849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.219893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.219909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.219917] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.219926] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.542 [2024-11-02 23:27:46.230311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.542 qpair failed and we were unable to recover it. 
00:28:40.542 [2024-11-02 23:27:46.239878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.542 [2024-11-02 23:27:46.239916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.542 [2024-11-02 23:27:46.239932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.542 [2024-11-02 23:27:46.239941] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.542 [2024-11-02 23:27:46.239949] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.543 [2024-11-02 23:27:46.250493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.543 qpair failed and we were unable to recover it. 00:28:40.543 [2024-11-02 23:27:46.259910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.543 [2024-11-02 23:27:46.259952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.543 [2024-11-02 23:27:46.259974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.543 [2024-11-02 23:27:46.259984] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.543 [2024-11-02 23:27:46.259992] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.543 [2024-11-02 23:27:46.270400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.543 qpair failed and we were unable to recover it. 00:28:40.543 [2024-11-02 23:27:46.279973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.543 [2024-11-02 23:27:46.280013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.543 [2024-11-02 23:27:46.280029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.543 [2024-11-02 23:27:46.280038] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.543 [2024-11-02 23:27:46.280047] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.543 [2024-11-02 23:27:46.290436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.543 qpair failed and we were unable to recover it. 
00:28:40.803 [2024-11-02 23:27:46.300022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.803 [2024-11-02 23:27:46.300071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.803 [2024-11-02 23:27:46.300087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.803 [2024-11-02 23:27:46.300096] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.803 [2024-11-02 23:27:46.300105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.803 [2024-11-02 23:27:46.310417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.803 qpair failed and we were unable to recover it. 00:28:40.803 [2024-11-02 23:27:46.320203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.803 [2024-11-02 23:27:46.320237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.803 [2024-11-02 23:27:46.320254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.803 [2024-11-02 23:27:46.320262] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.803 [2024-11-02 23:27:46.320271] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.803 [2024-11-02 23:27:46.330504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.803 qpair failed and we were unable to recover it. 00:28:40.803 [2024-11-02 23:27:46.340351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.803 [2024-11-02 23:27:46.340394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.803 [2024-11-02 23:27:46.340410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.803 [2024-11-02 23:27:46.340419] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.803 [2024-11-02 23:27:46.340427] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.803 [2024-11-02 23:27:46.350677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.803 qpair failed and we were unable to recover it. 
00:28:40.803 [2024-11-02 23:27:46.360300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.803 [2024-11-02 23:27:46.360340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.803 [2024-11-02 23:27:46.360359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.803 [2024-11-02 23:27:46.360367] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.803 [2024-11-02 23:27:46.360376] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.803 [2024-11-02 23:27:46.370988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.803 qpair failed and we were unable to recover it. 00:28:40.803 [2024-11-02 23:27:46.380440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.803 [2024-11-02 23:27:46.380480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.803 [2024-11-02 23:27:46.380496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.803 [2024-11-02 23:27:46.380505] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.803 [2024-11-02 23:27:46.380513] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.803 [2024-11-02 23:27:46.390828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.803 qpair failed and we were unable to recover it. 00:28:40.803 [2024-11-02 23:27:46.400500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.803 [2024-11-02 23:27:46.400539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.803 [2024-11-02 23:27:46.400554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.803 [2024-11-02 23:27:46.400563] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.803 [2024-11-02 23:27:46.400571] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.803 [2024-11-02 23:27:46.411156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.803 qpair failed and we were unable to recover it. 
00:28:40.803 [2024-11-02 23:27:46.420582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.803 [2024-11-02 23:27:46.420622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.803 [2024-11-02 23:27:46.420638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.803 [2024-11-02 23:27:46.420647] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.803 [2024-11-02 23:27:46.420656] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.803 [2024-11-02 23:27:46.431044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.803 qpair failed and we were unable to recover it. 00:28:40.804 [2024-11-02 23:27:46.440620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.804 [2024-11-02 23:27:46.440666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.804 [2024-11-02 23:27:46.440682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.804 [2024-11-02 23:27:46.440691] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.804 [2024-11-02 23:27:46.440702] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.804 [2024-11-02 23:27:46.451529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.804 qpair failed and we were unable to recover it. 00:28:40.804 [2024-11-02 23:27:46.460601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.804 [2024-11-02 23:27:46.460642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.804 [2024-11-02 23:27:46.460657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.804 [2024-11-02 23:27:46.460666] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.804 [2024-11-02 23:27:46.460674] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.804 [2024-11-02 23:27:46.471175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.804 qpair failed and we were unable to recover it. 
00:28:40.804 [2024-11-02 23:27:46.480665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.804 [2024-11-02 23:27:46.480704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.804 [2024-11-02 23:27:46.480720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.804 [2024-11-02 23:27:46.480729] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.804 [2024-11-02 23:27:46.480737] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.804 [2024-11-02 23:27:46.491323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.804 qpair failed and we were unable to recover it. 00:28:40.804 [2024-11-02 23:27:46.500730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.804 [2024-11-02 23:27:46.500769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.804 [2024-11-02 23:27:46.500785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.804 [2024-11-02 23:27:46.500794] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.804 [2024-11-02 23:27:46.500802] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.804 [2024-11-02 23:27:46.511325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.804 qpair failed and we were unable to recover it. 00:28:40.804 [2024-11-02 23:27:46.520758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.804 [2024-11-02 23:27:46.520796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.804 [2024-11-02 23:27:46.520812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.804 [2024-11-02 23:27:46.520821] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.804 [2024-11-02 23:27:46.520829] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.804 [2024-11-02 23:27:46.531309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.804 qpair failed and we were unable to recover it. 
00:28:40.804 [2024-11-02 23:27:46.540852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.804 [2024-11-02 23:27:46.540894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.804 [2024-11-02 23:27:46.540910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.804 [2024-11-02 23:27:46.540919] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.804 [2024-11-02 23:27:46.540927] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:40.804 [2024-11-02 23:27:46.551447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.804 qpair failed and we were unable to recover it. 00:28:41.065 [2024-11-02 23:27:46.561034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.065 [2024-11-02 23:27:46.561075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.065 [2024-11-02 23:27:46.561090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.065 [2024-11-02 23:27:46.561100] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.065 [2024-11-02 23:27:46.561108] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:41.065 [2024-11-02 23:27:46.571472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.065 qpair failed and we were unable to recover it. 00:28:41.065 [2024-11-02 23:27:46.580949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.065 [2024-11-02 23:27:46.580994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.065 [2024-11-02 23:27:46.581010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.065 [2024-11-02 23:27:46.581019] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.065 [2024-11-02 23:27:46.581028] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:41.065 [2024-11-02 23:27:46.591427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.065 qpair failed and we were unable to recover it. 
00:28:41.065 [2024-11-02 23:27:46.601099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.065 [2024-11-02 23:27:46.601142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.065 [2024-11-02 23:27:46.601158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.065 [2024-11-02 23:27:46.601166] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.065 [2024-11-02 23:27:46.601175] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:41.065 [2024-11-02 23:27:46.611634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.065 qpair failed and we were unable to recover it. 00:28:41.065 [2024-11-02 23:27:46.621071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.065 [2024-11-02 23:27:46.621112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.065 [2024-11-02 23:27:46.621127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.065 [2024-11-02 23:27:46.621142] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.065 [2024-11-02 23:27:46.621151] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:41.065 [2024-11-02 23:27:46.631611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.065 qpair failed and we were unable to recover it. 00:28:41.065 [2024-11-02 23:27:46.641147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.065 [2024-11-02 23:27:46.641191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.065 [2024-11-02 23:27:46.641206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.065 [2024-11-02 23:27:46.641215] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.065 [2024-11-02 23:27:46.641224] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:41.065 [2024-11-02 23:27:46.651582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.065 qpair failed and we were unable to recover it. 
00:28:41.065 [2024-11-02 23:27:46.661315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.065 [2024-11-02 23:27:46.661357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.065 [2024-11-02 23:27:46.661373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.065 [2024-11-02 23:27:46.661382] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.065 [2024-11-02 23:27:46.661390] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:41.065 [2024-11-02 23:27:46.671671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.065 qpair failed and we were unable to recover it. 00:28:41.065 [2024-11-02 23:27:46.671749] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:41.065 A controller has encountered a failure and is being reset. 00:28:41.065 [2024-11-02 23:27:46.681335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.065 [2024-11-02 23:27:46.681378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.065 [2024-11-02 23:27:46.681405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.065 [2024-11-02 23:27:46.681420] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.065 [2024-11-02 23:27:46.681432] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:41.065 [2024-11-02 23:27:46.691784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:41.065 qpair failed and we were unable to recover it. 00:28:41.065 [2024-11-02 23:27:46.701328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.065 [2024-11-02 23:27:46.701365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.065 [2024-11-02 23:27:46.701382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.065 [2024-11-02 23:27:46.701391] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.065 [2024-11-02 23:27:46.701403] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:41.065 [2024-11-02 23:27:46.711727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:41.065 qpair failed and we were unable to recover it. 
00:28:41.065 [2024-11-02 23:27:46.711891] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:41.065 [2024-11-02 23:27:46.713858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:41.065 Controller properly reset. 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Read completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 Write completed with error (sct=0, sc=8) 00:28:42.006 starting I/O failed 00:28:42.006 [2024-11-02 23:27:47.736569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:42.266 Initializing NVMe Controllers 00:28:42.266 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.266 Attached to NVMe over Fabrics 
controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.266 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:42.266 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:42.266 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:42.266 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:42.266 Initialization complete. Launching workers. 00:28:42.266 Starting thread on core 1 00:28:42.266 Starting thread on core 2 00:28:42.266 Starting thread on core 3 00:28:42.266 Starting thread on core 0 00:28:42.266 23:27:47 -- host/target_disconnect.sh@59 -- # sync 00:28:42.266 00:28:42.266 real 0m12.472s 00:28:42.266 user 0m27.100s 00:28:42.266 sys 0m3.015s 00:28:42.266 23:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.266 23:27:47 -- common/autotest_common.sh@10 -- # set +x 00:28:42.266 ************************************ 00:28:42.266 END TEST nvmf_target_disconnect_tc2 00:28:42.266 ************************************ 00:28:42.266 23:27:47 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:28:42.266 23:27:47 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:28:42.266 23:27:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:42.266 23:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:42.266 23:27:47 -- common/autotest_common.sh@10 -- # set +x 00:28:42.266 ************************************ 00:28:42.266 START TEST nvmf_target_disconnect_tc3 00:28:42.266 ************************************ 00:28:42.266 23:27:47 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc3 00:28:42.266 23:27:47 -- host/target_disconnect.sh@65 -- # reconnectpid=772138 00:28:42.266 23:27:47 -- host/target_disconnect.sh@67 -- # sleep 2 00:28:42.266 23:27:47 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:28:42.266 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.174 23:27:49 -- host/target_disconnect.sh@68 -- # kill -9 770769 00:28:44.174 23:27:49 -- host/target_disconnect.sh@70 -- # sleep 2 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with 
error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Read completed with error (sct=0, sc=8) 00:28:45.554 starting I/O failed 00:28:45.554 Write completed with error (sct=0, sc=8) 00:28:45.555 starting I/O failed 00:28:45.555 [2024-11-02 23:27:51.021070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:46.220 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 770769 Killed "${NVMF_APP[@]}" "$@" 00:28:46.220 23:27:51 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:28:46.220 23:27:51 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:46.220 23:27:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:46.220 23:27:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:46.220 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:46.220 23:27:51 -- nvmf/common.sh@469 -- # nvmfpid=772699 00:28:46.220 23:27:51 -- nvmf/common.sh@470 -- # waitforlisten 772699 00:28:46.220 23:27:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:46.220 23:27:51 -- common/autotest_common.sh@819 -- # '[' -z 772699 ']' 00:28:46.220 23:27:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.220 23:27:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:46.220 23:27:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.221 23:27:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:46.221 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:46.221 [2024-11-02 23:27:51.901488] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:46.221 [2024-11-02 23:27:51.901540] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.221 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.507 [2024-11-02 23:27:51.987762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Write completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 Read completed with error (sct=0, sc=8) 00:28:46.507 starting I/O failed 00:28:46.507 [2024-11-02 23:27:52.026241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.507 [2024-11-02 23:27:52.057304] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:46.507 [2024-11-02 23:27:52.057416] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.507 [2024-11-02 23:27:52.057427] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.507 [2024-11-02 23:27:52.057436] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.507 [2024-11-02 23:27:52.057576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:46.507 [2024-11-02 23:27:52.057686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:46.507 [2024-11-02 23:27:52.057795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:46.507 [2024-11-02 23:27:52.057793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:47.076 23:27:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:47.076 23:27:52 -- common/autotest_common.sh@852 -- # return 0 00:28:47.076 23:27:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:47.076 23:27:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:47.076 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:47.076 23:27:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.076 23:27:52 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:47.076 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.076 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:47.076 Malloc0 00:28:47.076 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.076 23:27:52 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:47.076 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.076 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:47.076 [2024-11-02 23:27:52.808743] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14953c0/0x14a0dc0) succeed. 00:28:47.076 [2024-11-02 23:27:52.818141] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14969b0/0x1520e00) succeed. 
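For reference, the target-side setup driven by the rpc_cmd calls above (together with the subsystem, namespace and listener calls that follow) can be reproduced by hand with rpc.py. This is a minimal sketch, not captured console output, and it assumes an nvmf_tgt is already running on the default /var/tmp/spdk.sock:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk             # workspace path used by this job
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
  $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # tc3 publishes the subsystem on the failover address rather than 192.168.100.8
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420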
00:28:47.335 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.335 23:27:52 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.335 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.335 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:47.335 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.335 23:27:52 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:47.335 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.335 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:47.335 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.335 23:27:52 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:28:47.335 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.335 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:47.335 [2024-11-02 23:27:52.956481] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:28:47.335 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.335 23:27:52 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:28:47.335 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.335 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:47.335 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.335 23:27:52 -- host/target_disconnect.sh@73 -- # wait 772138 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with 
error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Read completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 Write completed with error (sct=0, sc=8) 00:28:47.335 starting I/O failed 00:28:47.335 [2024-11-02 23:27:53.031242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error 
(sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Write completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 Read completed with error (sct=0, sc=8) 00:28:48.714 starting I/O failed 00:28:48.714 [2024-11-02 23:27:54.036275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:48.714 [2024-11-02 23:27:54.037839] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:48.714 [2024-11-02 23:27:54.037857] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:48.714 [2024-11-02 23:27:54.037873] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:49.649 [2024-11-02 23:27:55.041761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:49.649 qpair failed and we were unable to recover it. 00:28:49.649 [2024-11-02 23:27:55.043269] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:49.649 [2024-11-02 23:27:55.043285] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:49.649 [2024-11-02 23:27:55.043294] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:50.587 [2024-11-02 23:27:56.047169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-11-02 23:27:56.048932] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:50.587 [2024-11-02 23:27:56.048948] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:50.587 [2024-11-02 23:27:56.048956] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:51.525 [2024-11-02 23:27:57.052776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-11-02 23:27:57.054282] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:51.525 [2024-11-02 23:27:57.054298] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.525 [2024-11-02 23:27:57.054306] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:52.464 [2024-11-02 23:27:58.058186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:52.464 qpair failed and we were unable to recover it. 
00:28:52.464 [2024-11-02 23:27:58.059665] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:52.464 [2024-11-02 23:27:58.059681] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:52.464 [2024-11-02 23:27:58.059688] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:53.401 [2024-11-02 23:27:59.063687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:53.401 qpair failed and we were unable to recover it. 00:28:53.401 [2024-11-02 23:27:59.065058] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:53.401 [2024-11-02 23:27:59.065074] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:53.401 [2024-11-02 23:27:59.065082] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:54.337 [2024-11-02 23:28:00.069219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.337 qpair failed and we were unable to recover it. 00:28:54.337 [2024-11-02 23:28:00.071093] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:54.337 [2024-11-02 23:28:00.071123] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:54.337 [2024-11-02 23:28:00.071135] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:55.713 [2024-11-02 23:28:01.074997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-02 23:28:01.076568] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:55.713 [2024-11-02 23:28:01.076585] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:55.713 [2024-11-02 23:28:01.076593] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:56.649 [2024-11-02 23:28:02.080444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.649 qpair failed and we were unable to recover it. 00:28:56.649 [2024-11-02 23:28:02.080591] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:56.649 A controller has encountered a failure and is being reset. 
00:28:56.649 Resorting to new failover address 192.168.100.9 00:28:56.650 [2024-11-02 23:28:02.082556] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:56.650 [2024-11-02 23:28:02.082584] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:56.650 [2024-11-02 23:28:02.082596] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:57.586 [2024-11-02 23:28:03.086498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-11-02 23:28:03.088002] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:57.586 [2024-11-02 23:28:03.088019] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:57.586 [2024-11-02 23:28:03.088027] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:58.522 [2024-11-02 23:28:04.091829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.522 qpair failed and we were unable to recover it. 00:28:58.522 [2024-11-02 23:28:04.091958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.523 [2024-11-02 23:28:04.092096] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:58.523 [2024-11-02 23:28:04.123698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:58.523 Controller properly reset. 00:28:58.523 Initializing NVMe Controllers 00:28:58.523 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.523 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.523 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:58.523 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:58.523 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:58.523 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:58.523 Initialization complete. Launching workers. 
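The rejected CONNECT attempts, the "Resorting to new failover address 192.168.100.9" message and the subsequent controller reset above are produced by the reconnect example that the tc3 test launched earlier. A sketch of that invocation (the command line is the one shown in the test step; $SPDK is the workspace path from the previous sketch, and the flag descriptions are annotations, not log output):

  # -q queue depth, -o I/O size in bytes, -w workload, -M read percentage of the mix,
  # -t run time in seconds, -c core mask, -r primary target with alt_traddr as the failover address
  $SPDK/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'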
00:28:58.523 Starting thread on core 1 00:28:58.523 Starting thread on core 2 00:28:58.523 Starting thread on core 3 00:28:58.523 Starting thread on core 0 00:28:58.523 23:28:04 -- host/target_disconnect.sh@74 -- # sync 00:28:58.523 00:28:58.523 real 0m16.364s 00:28:58.523 user 0m53.702s 00:28:58.523 sys 0m5.057s 00:28:58.523 23:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.523 23:28:04 -- common/autotest_common.sh@10 -- # set +x 00:28:58.523 ************************************ 00:28:58.523 END TEST nvmf_target_disconnect_tc3 00:28:58.523 ************************************ 00:28:58.523 23:28:04 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:58.523 23:28:04 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:28:58.523 23:28:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:58.523 23:28:04 -- nvmf/common.sh@116 -- # sync 00:28:58.523 23:28:04 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:58.523 23:28:04 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:58.523 23:28:04 -- nvmf/common.sh@119 -- # set +e 00:28:58.523 23:28:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:58.523 23:28:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:58.523 rmmod nvme_rdma 00:28:58.523 rmmod nvme_fabrics 00:28:58.783 23:28:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:58.783 23:28:04 -- nvmf/common.sh@123 -- # set -e 00:28:58.783 23:28:04 -- nvmf/common.sh@124 -- # return 0 00:28:58.783 23:28:04 -- nvmf/common.sh@477 -- # '[' -n 772699 ']' 00:28:58.783 23:28:04 -- nvmf/common.sh@478 -- # killprocess 772699 00:28:58.783 23:28:04 -- common/autotest_common.sh@926 -- # '[' -z 772699 ']' 00:28:58.783 23:28:04 -- common/autotest_common.sh@930 -- # kill -0 772699 00:28:58.783 23:28:04 -- common/autotest_common.sh@931 -- # uname 00:28:58.783 23:28:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:58.783 23:28:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 772699 00:28:58.783 23:28:04 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:28:58.783 23:28:04 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:28:58.783 23:28:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 772699' 00:28:58.783 killing process with pid 772699 00:28:58.783 23:28:04 -- common/autotest_common.sh@945 -- # kill 772699 00:28:58.783 23:28:04 -- common/autotest_common.sh@950 -- # wait 772699 00:28:59.043 23:28:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:59.043 23:28:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:59.043 00:28:59.043 real 0m37.244s 00:28:59.043 user 2m13.284s 00:28:59.043 sys 0m13.851s 00:28:59.043 23:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.043 23:28:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.043 ************************************ 00:28:59.043 END TEST nvmf_target_disconnect 00:28:59.043 ************************************ 00:28:59.043 23:28:04 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:28:59.043 23:28:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:59.043 23:28:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.043 23:28:04 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:28:59.043 00:28:59.043 real 21m12.245s 00:28:59.043 user 68m1.063s 00:28:59.043 sys 4m53.745s 00:28:59.043 23:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.043 23:28:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.043 ************************************ 00:28:59.043 END 
TEST nvmf_rdma 00:28:59.043 ************************************ 00:28:59.043 23:28:04 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:28:59.043 23:28:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:59.043 23:28:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:59.043 23:28:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.043 ************************************ 00:28:59.043 START TEST spdkcli_nvmf_rdma 00:28:59.043 ************************************ 00:28:59.043 23:28:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:28:59.302 * Looking for test storage... 00:28:59.302 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:28:59.302 23:28:04 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:28:59.302 23:28:04 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:59.302 23:28:04 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:28:59.302 23:28:04 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.302 23:28:04 -- nvmf/common.sh@7 -- # uname -s 00:28:59.302 23:28:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.302 23:28:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.302 23:28:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.302 23:28:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.302 23:28:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.302 23:28:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.302 23:28:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.303 23:28:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.303 23:28:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.303 23:28:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.303 23:28:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:59.303 23:28:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:59.303 23:28:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.303 23:28:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.303 23:28:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.303 23:28:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:59.303 23:28:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.303 23:28:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.303 23:28:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.303 23:28:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.303 23:28:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.303 23:28:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.303 23:28:04 -- paths/export.sh@5 -- # export PATH 00:28:59.303 23:28:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.303 23:28:04 -- nvmf/common.sh@46 -- # : 0 00:28:59.303 23:28:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:59.303 23:28:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:59.303 23:28:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:59.303 23:28:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.303 23:28:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.303 23:28:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:59.303 23:28:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:59.303 23:28:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:59.303 23:28:04 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:59.303 23:28:04 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:59.303 23:28:04 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:59.303 23:28:04 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:59.303 23:28:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:59.303 23:28:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.303 23:28:04 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:59.303 23:28:04 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=774977 00:28:59.303 23:28:04 -- spdkcli/common.sh@34 -- # waitforlisten 774977 00:28:59.303 23:28:04 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:59.303 23:28:04 -- common/autotest_common.sh@819 -- # '[' -z 774977 ']' 00:28:59.303 23:28:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.303 23:28:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:59.303 23:28:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.303 23:28:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:59.303 23:28:04 -- common/autotest_common.sh@10 -- # set +x 00:28:59.303 [2024-11-02 23:28:04.983902] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:59.303 [2024-11-02 23:28:04.983959] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774977 ] 00:28:59.303 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.303 [2024-11-02 23:28:05.054878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:59.562 [2024-11-02 23:28:05.126241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:59.562 [2024-11-02 23:28:05.126412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.562 [2024-11-02 23:28:05.126414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.129 23:28:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:00.129 23:28:05 -- common/autotest_common.sh@852 -- # return 0 00:29:00.129 23:28:05 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:00.129 23:28:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:00.130 23:28:05 -- common/autotest_common.sh@10 -- # set +x 00:29:00.130 23:28:05 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:00.130 23:28:05 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:00.130 23:28:05 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:00.130 23:28:05 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:00.130 23:28:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.130 23:28:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:00.130 23:28:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:00.130 23:28:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:00.130 23:28:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.130 23:28:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:00.130 23:28:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.130 23:28:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:00.130 23:28:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:00.130 23:28:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:00.130 23:28:05 -- common/autotest_common.sh@10 -- # set +x 00:29:08.251 23:28:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:08.251 23:28:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:08.251 23:28:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:08.251 23:28:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:08.251 23:28:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:08.251 23:28:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:08.251 23:28:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:08.251 23:28:12 -- nvmf/common.sh@294 -- # net_devs=() 00:29:08.251 23:28:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:08.251 23:28:12 -- nvmf/common.sh@295 -- # e810=() 00:29:08.251 23:28:12 -- nvmf/common.sh@295 -- # local -ga e810 00:29:08.251 23:28:12 -- nvmf/common.sh@296 -- # x722=() 00:29:08.251 23:28:12 -- nvmf/common.sh@296 -- # local -ga x722 00:29:08.251 23:28:12 -- nvmf/common.sh@297 -- # mlx=() 00:29:08.251 23:28:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:08.251 23:28:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.251 23:28:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:08.251 23:28:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:08.251 23:28:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:08.251 23:28:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:08.251 23:28:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:08.251 23:28:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:08.251 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:08.251 23:28:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:08.251 23:28:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:08.251 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:08.251 23:28:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:08.251 23:28:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:08.251 23:28:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.251 23:28:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:08.251 23:28:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.251 23:28:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:08.251 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:08.251 23:28:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.251 23:28:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.251 23:28:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:08.251 23:28:12 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.251 23:28:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:08.251 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:08.251 23:28:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.251 23:28:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:08.251 23:28:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:08.251 23:28:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:08.251 23:28:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:08.251 23:28:12 -- nvmf/common.sh@57 -- # uname 00:29:08.251 23:28:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:08.251 23:28:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:08.251 23:28:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:08.251 23:28:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:08.251 23:28:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:08.251 23:28:12 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:08.251 23:28:12 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:08.251 23:28:12 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:08.251 23:28:12 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:08.251 23:28:12 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:08.251 23:28:12 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:08.251 23:28:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:08.251 23:28:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:08.251 23:28:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:08.251 23:28:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:08.251 23:28:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:08.251 23:28:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:08.251 23:28:12 -- nvmf/common.sh@104 -- # continue 2 00:29:08.251 23:28:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:08.251 23:28:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:08.251 23:28:12 -- nvmf/common.sh@104 -- # continue 2 00:29:08.251 23:28:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:08.251 23:28:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:08.251 23:28:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:08.251 23:28:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:08.251 23:28:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:08.251 23:28:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:08.251 23:28:12 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:08.251 23:28:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:08.251 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:29:08.251 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:08.251 altname enp217s0f0np0 00:29:08.251 altname ens818f0np0 00:29:08.251 inet 192.168.100.8/24 scope global mlx_0_0 00:29:08.251 valid_lft forever preferred_lft forever 00:29:08.251 23:28:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:08.251 23:28:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:08.251 23:28:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:08.251 23:28:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:08.251 23:28:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:08.251 23:28:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:08.251 23:28:12 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:08.251 23:28:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:08.251 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:08.251 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:08.251 altname enp217s0f1np1 00:29:08.251 altname ens818f1np1 00:29:08.251 inet 192.168.100.9/24 scope global mlx_0_1 00:29:08.251 valid_lft forever preferred_lft forever 00:29:08.251 23:28:12 -- nvmf/common.sh@410 -- # return 0 00:29:08.251 23:28:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:08.251 23:28:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:08.251 23:28:12 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:08.251 23:28:12 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:08.251 23:28:12 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:08.251 23:28:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:08.251 23:28:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:08.251 23:28:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:08.251 23:28:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:08.251 23:28:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:08.252 23:28:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:08.252 23:28:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:08.252 23:28:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:08.252 23:28:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:08.252 23:28:12 -- nvmf/common.sh@104 -- # continue 2 00:29:08.252 23:28:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:08.252 23:28:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:08.252 23:28:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:08.252 23:28:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:08.252 23:28:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:08.252 23:28:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:08.252 23:28:12 -- nvmf/common.sh@104 -- # continue 2 00:29:08.252 23:28:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:08.252 23:28:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:08.252 23:28:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:08.252 23:28:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:08.252 23:28:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:08.252 23:28:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:08.252 23:28:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:08.252 23:28:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:08.252 23:28:12 -- 
nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:08.252 23:28:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:08.252 23:28:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:08.252 23:28:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:08.252 23:28:12 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:08.252 192.168.100.9' 00:29:08.252 23:28:12 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:08.252 192.168.100.9' 00:29:08.252 23:28:12 -- nvmf/common.sh@445 -- # head -n 1 00:29:08.252 23:28:12 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:08.252 23:28:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:08.252 192.168.100.9' 00:29:08.252 23:28:12 -- nvmf/common.sh@446 -- # tail -n +2 00:29:08.252 23:28:12 -- nvmf/common.sh@446 -- # head -n 1 00:29:08.252 23:28:12 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:08.252 23:28:12 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:08.252 23:28:12 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:08.252 23:28:12 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:08.252 23:28:12 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:08.252 23:28:12 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:08.252 23:28:12 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:08.252 23:28:12 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:08.252 23:28:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:08.252 23:28:12 -- common/autotest_common.sh@10 -- # set +x 00:29:08.252 23:28:12 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:08.252 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:08.252 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:08.252 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:08.252 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:08.252 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:08.252 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:08.252 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:08.252 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:08.252 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:08.252 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:08.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:08.252 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:08.252 ' 00:29:08.252 [2024-11-02 23:28:13.060268] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:09.629 [2024-11-02 23:28:15.123626] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc756e0/0xc77a00) succeed. 00:29:09.629 [2024-11-02 23:28:15.133660] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc76dc0/0xcb90a0) succeed. 
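The spdkcli_job.py invocation above drives spdkcli in batch mode, feeding it the quoted command list and recording each result. As a rough illustration only — assuming an SPDK target application is already running and reachable over its default RPC socket, and reusing the bdev names, NQN, listen address and port exactly as they appear in the trace — the cnode1 portion of that configuration could equally be applied with individual spdkcli.py calls:

    # Illustrative sketch; the SPDKCLI path is an assumption, adjust to the local SPDK checkout.
    SPDKCLI=./scripts/spdkcli.py

    # Backing malloc bdevs, created with the same arguments (32, 512) as in the trace above
    $SPDKCLI '/bdevs/malloc create 32 512 Malloc3'
    $SPDKCLI '/bdevs/malloc create 32 512 Malloc4'

    # RDMA transport with the same parameters the test uses
    $SPDKCLI 'nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'

    # Subsystem cnode1: two namespaces plus an RDMA listener on 192.168.100.8:4260
    $SPDKCLI '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
    $SPDKCLI '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'
    $SPDKCLI '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'
    $SPDKCLI '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'

The batch form used by the test additionally asserts on each command's expected success or failure, which is what the "Executing command: [...]" lines further down record.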
00:29:11.007 [2024-11-02 23:28:16.392492] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:12.912 [2024-11-02 23:28:18.583442] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:14.816 [2024-11-02 23:28:20.469800] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:29:16.194 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:16.194 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:16.194 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:16.194 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:16.194 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:16.194 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:16.194 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:16.194 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:16.194 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:16.194 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:16.194 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:16.194 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:16.194 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:16.454 23:28:22 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:16.454 23:28:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:16.454 23:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:16.454 23:28:22 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:16.454 23:28:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:16.454 23:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:16.454 23:28:22 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:16.454 23:28:22 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:16.713 23:28:22 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:16.972 23:28:22 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:16.972 23:28:22 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:16.972 23:28:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:16.972 23:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:16.972 23:28:22 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:16.972 23:28:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:16.972 23:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:16.972 23:28:22 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:16.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:16.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:16.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:16.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:29:16.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:29:16.972 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:16.972 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:16.972 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:16.972 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:16.972 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:16.972 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:16.972 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:16.972 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:16.972 ' 00:29:22.342 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:22.342 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:22.342 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:22.342 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:22.342 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:29:22.342 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:29:22.342 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:22.342 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:22.342 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:22.342 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:22.342 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:22.342 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:22.342 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:22.342 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:22.342 23:28:27 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:22.342 23:28:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:22.342 23:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:22.342 23:28:27 -- spdkcli/nvmf.sh@90 -- # killprocess 774977 00:29:22.342 23:28:27 -- common/autotest_common.sh@926 -- # '[' -z 774977 ']' 00:29:22.342 23:28:27 -- common/autotest_common.sh@930 -- # kill -0 774977 00:29:22.342 23:28:27 -- common/autotest_common.sh@931 -- # uname 00:29:22.342 23:28:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:22.342 23:28:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 774977 00:29:22.342 23:28:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:22.342 23:28:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:22.342 23:28:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 774977' 00:29:22.342 killing process with pid 774977 00:29:22.342 23:28:27 -- common/autotest_common.sh@945 -- # kill 774977 00:29:22.342 [2024-11-02 23:28:27.630308] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:22.342 23:28:27 -- common/autotest_common.sh@950 -- # wait 774977 00:29:22.342 23:28:27 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:29:22.342 23:28:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:22.342 23:28:27 -- nvmf/common.sh@116 -- # sync 00:29:22.342 23:28:27 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:22.342 23:28:27 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:22.342 23:28:27 -- nvmf/common.sh@119 -- # set +e 00:29:22.342 23:28:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:22.342 23:28:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:22.342 rmmod nvme_rdma 00:29:22.342 rmmod nvme_fabrics 00:29:22.342 23:28:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:22.342 
23:28:27 -- nvmf/common.sh@123 -- # set -e 00:29:22.342 23:28:27 -- nvmf/common.sh@124 -- # return 0 00:29:22.342 23:28:27 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:29:22.342 23:28:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:22.342 23:28:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:22.342 00:29:22.342 real 0m23.149s 00:29:22.342 user 0m49.235s 00:29:22.342 sys 0m6.035s 00:29:22.342 23:28:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.342 23:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:22.342 ************************************ 00:29:22.342 END TEST spdkcli_nvmf_rdma 00:29:22.343 ************************************ 00:29:22.343 23:28:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:22.343 23:28:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:22.343 23:28:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:22.343 23:28:27 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:22.343 23:28:27 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:22.343 23:28:27 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:22.343 23:28:27 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:22.343 23:28:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:22.343 23:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:22.343 23:28:27 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:22.343 23:28:27 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:22.343 23:28:27 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:22.343 23:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:28.912 INFO: APP EXITING 00:29:28.912 INFO: killing all VMs 00:29:28.912 INFO: killing vhost app 00:29:28.912 INFO: EXIT DONE 00:29:31.447 Waiting for block devices as requested 00:29:31.447 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:31.447 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:31.447 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:31.447 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:31.706 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:31.706 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:31.706 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:31.706 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:31.966 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:31.966 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:31.966 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:32.225 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:32.225 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:32.225 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:32.484 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:32.484 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:32.484 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:36.679 Cleaning 00:29:36.679 Removing: /var/run/dpdk/spdk0/config 00:29:36.679 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:36.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:36.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:36.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:36.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:36.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:36.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:36.679 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:36.679 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:36.679 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:36.679 Removing: /var/run/dpdk/spdk1/config 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:36.679 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:36.679 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:36.679 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:36.679 Removing: /var/run/dpdk/spdk2/config 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:36.679 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:36.679 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:36.679 Removing: /var/run/dpdk/spdk3/config 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:36.679 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:36.679 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:36.679 Removing: /var/run/dpdk/spdk4/config 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:36.679 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:36.679 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:36.679 Removing: 
/dev/shm/bdevperf_trace.pid604958 00:29:36.679 Removing: /dev/shm/bdevperf_trace.pid699052 00:29:36.679 Removing: /dev/shm/bdev_svc_trace.1 00:29:36.679 Removing: /dev/shm/nvmf_trace.0 00:29:36.679 Removing: /dev/shm/spdk_tgt_trace.pid441009 00:29:36.679 Removing: /var/run/dpdk/spdk0 00:29:36.679 Removing: /var/run/dpdk/spdk1 00:29:36.679 Removing: /var/run/dpdk/spdk2 00:29:36.679 Removing: /var/run/dpdk/spdk3 00:29:36.679 Removing: /var/run/dpdk/spdk4 00:29:36.679 Removing: /var/run/dpdk/spdk_pid438469 00:29:36.679 Removing: /var/run/dpdk/spdk_pid439749 00:29:36.679 Removing: /var/run/dpdk/spdk_pid441009 00:29:36.679 Removing: /var/run/dpdk/spdk_pid441721 00:29:36.679 Removing: /var/run/dpdk/spdk_pid446853 00:29:36.679 Removing: /var/run/dpdk/spdk_pid448326 00:29:36.679 Removing: /var/run/dpdk/spdk_pid448645 00:29:36.679 Removing: /var/run/dpdk/spdk_pid448963 00:29:36.679 Removing: /var/run/dpdk/spdk_pid449305 00:29:36.679 Removing: /var/run/dpdk/spdk_pid449626 00:29:36.679 Removing: /var/run/dpdk/spdk_pid449923 00:29:36.679 Removing: /var/run/dpdk/spdk_pid450209 00:29:36.679 Removing: /var/run/dpdk/spdk_pid450519 00:29:36.679 Removing: /var/run/dpdk/spdk_pid451382 00:29:36.679 Removing: /var/run/dpdk/spdk_pid454575 00:29:36.679 Removing: /var/run/dpdk/spdk_pid454874 00:29:36.679 Removing: /var/run/dpdk/spdk_pid455184 00:29:36.679 Removing: /var/run/dpdk/spdk_pid455407 00:29:36.679 Removing: /var/run/dpdk/spdk_pid455765 00:29:36.679 Removing: /var/run/dpdk/spdk_pid456033 00:29:36.679 Removing: /var/run/dpdk/spdk_pid456607 00:29:36.679 Removing: /var/run/dpdk/spdk_pid456665 00:29:36.679 Removing: /var/run/dpdk/spdk_pid457016 00:29:36.679 Removing: /var/run/dpdk/spdk_pid457188 00:29:36.679 Removing: /var/run/dpdk/spdk_pid457480 00:29:36.679 Removing: /var/run/dpdk/spdk_pid457503 00:29:36.679 Removing: /var/run/dpdk/spdk_pid458124 00:29:36.679 Removing: /var/run/dpdk/spdk_pid458399 00:29:36.679 Removing: /var/run/dpdk/spdk_pid458666 00:29:36.679 Removing: /var/run/dpdk/spdk_pid458904 00:29:36.679 Removing: /var/run/dpdk/spdk_pid459065 00:29:36.679 Removing: /var/run/dpdk/spdk_pid459126 00:29:36.679 Removing: /var/run/dpdk/spdk_pid459399 00:29:36.679 Removing: /var/run/dpdk/spdk_pid459682 00:29:36.679 Removing: /var/run/dpdk/spdk_pid459949 00:29:36.679 Removing: /var/run/dpdk/spdk_pid460229 00:29:36.679 Removing: /var/run/dpdk/spdk_pid460450 00:29:36.679 Removing: /var/run/dpdk/spdk_pid460764 00:29:36.679 Removing: /var/run/dpdk/spdk_pid460966 00:29:36.679 Removing: /var/run/dpdk/spdk_pid461338 00:29:36.679 Removing: /var/run/dpdk/spdk_pid461871 00:29:36.679 Removing: /var/run/dpdk/spdk_pid462202 00:29:36.679 Removing: /var/run/dpdk/spdk_pid462475 00:29:36.679 Removing: /var/run/dpdk/spdk_pid462756 00:29:36.679 Removing: /var/run/dpdk/spdk_pid463030 00:29:36.679 Removing: /var/run/dpdk/spdk_pid463296 00:29:36.679 Removing: /var/run/dpdk/spdk_pid463466 00:29:36.679 Removing: /var/run/dpdk/spdk_pid463687 00:29:36.679 Removing: /var/run/dpdk/spdk_pid463897 00:29:36.679 Removing: /var/run/dpdk/spdk_pid464178 00:29:36.679 Removing: /var/run/dpdk/spdk_pid464450 00:29:36.679 Removing: /var/run/dpdk/spdk_pid464733 00:29:36.679 Removing: /var/run/dpdk/spdk_pid465005 00:29:36.679 Removing: /var/run/dpdk/spdk_pid465294 00:29:36.679 Removing: /var/run/dpdk/spdk_pid465560 00:29:36.679 Removing: /var/run/dpdk/spdk_pid465777 00:29:36.679 Removing: /var/run/dpdk/spdk_pid465956 00:29:36.679 Removing: /var/run/dpdk/spdk_pid466170 00:29:36.679 Removing: /var/run/dpdk/spdk_pid466422 00:29:36.679 Removing: 
/var/run/dpdk/spdk_pid466713 00:29:36.679 Removing: /var/run/dpdk/spdk_pid466979 00:29:36.679 Removing: /var/run/dpdk/spdk_pid467260 00:29:36.679 Removing: /var/run/dpdk/spdk_pid467538 00:29:36.679 Removing: /var/run/dpdk/spdk_pid467820 00:29:36.679 Removing: /var/run/dpdk/spdk_pid468038 00:29:36.679 Removing: /var/run/dpdk/spdk_pid468267 00:29:36.679 Removing: /var/run/dpdk/spdk_pid468451 00:29:36.679 Removing: /var/run/dpdk/spdk_pid468709 00:29:36.679 Removing: /var/run/dpdk/spdk_pid468968 00:29:36.679 Removing: /var/run/dpdk/spdk_pid469253 00:29:36.679 Removing: /var/run/dpdk/spdk_pid469521 00:29:36.679 Removing: /var/run/dpdk/spdk_pid469813 00:29:36.679 Removing: /var/run/dpdk/spdk_pid469887 00:29:36.679 Removing: /var/run/dpdk/spdk_pid470283 00:29:36.679 Removing: /var/run/dpdk/spdk_pid474332 00:29:36.679 Removing: /var/run/dpdk/spdk_pid570812 00:29:36.679 Removing: /var/run/dpdk/spdk_pid575085 00:29:36.679 Removing: /var/run/dpdk/spdk_pid586076 00:29:36.679 Removing: /var/run/dpdk/spdk_pid591333 00:29:36.679 Removing: /var/run/dpdk/spdk_pid594963 00:29:36.679 Removing: /var/run/dpdk/spdk_pid595756 00:29:36.679 Removing: /var/run/dpdk/spdk_pid604958 00:29:36.679 Removing: /var/run/dpdk/spdk_pid605410 00:29:36.679 Removing: /var/run/dpdk/spdk_pid609460 00:29:36.679 Removing: /var/run/dpdk/spdk_pid615376 00:29:36.679 Removing: /var/run/dpdk/spdk_pid618146 00:29:36.679 Removing: /var/run/dpdk/spdk_pid628415 00:29:36.679 Removing: /var/run/dpdk/spdk_pid653516 00:29:36.679 Removing: /var/run/dpdk/spdk_pid657210 00:29:36.679 Removing: /var/run/dpdk/spdk_pid662319 00:29:36.679 Removing: /var/run/dpdk/spdk_pid696942 00:29:36.679 Removing: /var/run/dpdk/spdk_pid697996 00:29:36.679 Removing: /var/run/dpdk/spdk_pid699052 00:29:36.679 Removing: /var/run/dpdk/spdk_pid703422 00:29:36.679 Removing: /var/run/dpdk/spdk_pid710600 00:29:36.679 Removing: /var/run/dpdk/spdk_pid711437 00:29:36.679 Removing: /var/run/dpdk/spdk_pid712446 00:29:36.939 Removing: /var/run/dpdk/spdk_pid713340 00:29:36.939 Removing: /var/run/dpdk/spdk_pid713808 00:29:36.939 Removing: /var/run/dpdk/spdk_pid718200 00:29:36.939 Removing: /var/run/dpdk/spdk_pid718275 00:29:36.939 Removing: /var/run/dpdk/spdk_pid722753 00:29:36.939 Removing: /var/run/dpdk/spdk_pid723336 00:29:36.939 Removing: /var/run/dpdk/spdk_pid724113 00:29:36.939 Removing: /var/run/dpdk/spdk_pid725166 00:29:36.939 Removing: /var/run/dpdk/spdk_pid725305 00:29:36.939 Removing: /var/run/dpdk/spdk_pid727738 00:29:36.939 Removing: /var/run/dpdk/spdk_pid729674 00:29:36.939 Removing: /var/run/dpdk/spdk_pid731559 00:29:36.939 Removing: /var/run/dpdk/spdk_pid733498 00:29:36.939 Removing: /var/run/dpdk/spdk_pid735383 00:29:36.939 Removing: /var/run/dpdk/spdk_pid737273 00:29:36.939 Removing: /var/run/dpdk/spdk_pid743456 00:29:36.939 Removing: /var/run/dpdk/spdk_pid744122 00:29:36.939 Removing: /var/run/dpdk/spdk_pid746436 00:29:36.939 Removing: /var/run/dpdk/spdk_pid747510 00:29:36.939 Removing: /var/run/dpdk/spdk_pid754502 00:29:36.939 Removing: /var/run/dpdk/spdk_pid757445 00:29:36.939 Removing: /var/run/dpdk/spdk_pid763458 00:29:36.939 Removing: /var/run/dpdk/spdk_pid763734 00:29:36.939 Removing: /var/run/dpdk/spdk_pid769641 00:29:36.939 Removing: /var/run/dpdk/spdk_pid770212 00:29:36.939 Removing: /var/run/dpdk/spdk_pid772138 00:29:36.939 Removing: /var/run/dpdk/spdk_pid774977 00:29:36.939 Clean 00:29:36.939 killing process with pid 388797 00:29:55.036 killing process with pid 388794 00:29:55.036 killing process with pid 388796 00:29:55.036 killing process with pid 
388795 00:29:55.036 23:28:58 -- common/autotest_common.sh@1436 -- # return 0 00:29:55.036 23:28:58 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:29:55.036 23:28:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:55.036 23:28:58 -- common/autotest_common.sh@10 -- # set +x 00:29:55.036 23:28:58 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:29:55.036 23:28:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:55.036 23:28:58 -- common/autotest_common.sh@10 -- # set +x 00:29:55.036 23:28:58 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:29:55.036 23:28:58 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:29:55.036 23:28:58 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:29:55.036 23:28:58 -- spdk/autotest.sh@394 -- # hash lcov 00:29:55.036 23:28:58 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:55.036 23:28:58 -- spdk/autotest.sh@396 -- # hostname 00:29:55.036 23:28:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:29:55.036 geninfo: WARNING: invalid characters removed from testname! 00:30:13.126 23:29:17 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:13.695 23:29:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:15.599 23:29:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:16.977 23:29:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:18.355 23:29:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:19.734 23:29:25 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:21.640 23:29:26 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:21.640 23:29:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:21.640 23:29:27 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:21.640 23:29:27 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.640 23:29:27 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.640 23:29:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.640 23:29:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.640 23:29:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.640 23:29:27 -- paths/export.sh@5 -- $ export PATH 00:30:21.640 23:29:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.640 23:29:27 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:30:21.640 23:29:27 -- common/autobuild_common.sh@440 -- $ date +%s 00:30:21.640 23:29:27 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1730586567.XXXXXX 00:30:21.640 23:29:27 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1730586567.0TDkdz 00:30:21.640 23:29:27 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:30:21.640 23:29:27 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:30:21.640 23:29:27 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:30:21.640 23:29:27 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:21.640 23:29:27 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:21.640 23:29:27 -- common/autobuild_common.sh@456 -- $ get_config_params 00:30:21.640 23:29:27 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:21.640 23:29:27 -- common/autotest_common.sh@10 -- $ set +x 00:30:21.640 23:29:27 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:30:21.640 23:29:27 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:30:21.640 23:29:27 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:21.640 23:29:27 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:21.640 23:29:27 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:21.640 23:29:27 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:21.640 23:29:27 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:21.640 23:29:27 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:21.640 23:29:27 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:21.640 23:29:27 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:21.640 23:29:27 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:21.640 + [[ -n 346476 ]] 00:30:21.640 + sudo kill 346476 00:30:21.650 [Pipeline] } 00:30:21.665 [Pipeline] // stage 00:30:21.671 [Pipeline] } 00:30:21.686 [Pipeline] // timeout 00:30:21.691 [Pipeline] } 00:30:21.705 [Pipeline] // catchError 00:30:21.710 [Pipeline] } 00:30:21.725 [Pipeline] // wrap 00:30:21.731 [Pipeline] } 00:30:21.744 [Pipeline] // catchError 00:30:21.754 [Pipeline] stage 00:30:21.756 [Pipeline] { (Epilogue) 00:30:21.769 [Pipeline] catchError 00:30:21.771 [Pipeline] { 00:30:21.784 [Pipeline] echo 00:30:21.786 Cleanup processes 00:30:21.791 [Pipeline] sh 00:30:22.077 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:22.077 796346 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:22.091 [Pipeline] sh 00:30:22.376 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:22.376 ++ grep -v 'sudo pgrep' 00:30:22.376 ++ awk '{print $1}' 00:30:22.376 + sudo kill -9 00:30:22.376 + true 00:30:22.387 [Pipeline] sh 00:30:22.672 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:22.672 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:30:29.246 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:30:31.901 [Pipeline] sh 00:30:32.185 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:32.185 Artifacts sizes are good 00:30:32.198 [Pipeline] archiveArtifacts 00:30:32.204 Archiving artifacts 00:30:32.336 [Pipeline] sh 00:30:32.619 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:30:32.633 [Pipeline] cleanWs 00:30:32.642 [WS-CLEANUP] Deleting project workspace... 00:30:32.642 [WS-CLEANUP] Deferred wipeout is used... 
00:30:32.648 [WS-CLEANUP] done 00:30:32.649 [Pipeline] } 00:30:32.664 [Pipeline] // catchError 00:30:32.674 [Pipeline] sh 00:30:32.955 + logger -p user.info -t JENKINS-CI 00:30:32.964 [Pipeline] } 00:30:32.976 [Pipeline] // stage 00:30:32.981 [Pipeline] } 00:30:32.995 [Pipeline] // node 00:30:33.000 [Pipeline] End of Pipeline 00:30:33.039 Finished: SUCCESS
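For reference, the coverage post-processing traced near the end of the run (a geninfo capture under the spdk-wfp-21 test name, a merge with the base capture, then successive filtering passes) has roughly the following shape. This is a condensed sketch: the output directory variable and the shortened flag set stand in for the full workspace paths and the longer --rc option list shown in the log.

    # Condensed sketch of the lcov steps from the trace; OUT and LCOV_OPTS are abbreviations.
    OUT=./output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # Capture coverage for this run, then merge it with the base capture
    lcov $LCOV_OPTS -c -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Drop sources that should not count toward SPDK coverage, one pattern per pass
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done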